Oracle® Communications OC-CNE Installation Guide
Release 1.0
F16979-01
Install VMs for MySQL Nodes and Management Server

MySQL Cluster Topology

Virtual Machine

System Details

Prerequisites

Limitations and Expectations

References

Add bridge interface in all the hosts

The following procedure creates the virtual machines (VMs) for the MySQL Cluster nodes (management nodes, data nodes, and SQL nodes) and the Bastion Host VM on each Storage Host, and installs Oracle Linux 7.5 on each VM. The procedure for installing MySQL Cluster on these VMs is documented in the Database Tier Installer.

After all the hosts are provisioned using the os-install container, this procedure is used to create the VMs on the Kubernetes master nodes and Storage Hosts.

The procedure below details the steps required to install the Bastion Hosts on the Storage Hosts and the MySQL Cluster node VMs. It requires all of the network information needed to create VMs on the different host servers (Kubernetes master nodes and Storage Hosts).

The VMs are created manually using the virt-install CLI tool, and MySQL Cluster is installed using the db-install Docker container as outlined in the OCCNE Database Tier Installer.

MySQL Cluster Manager is a distributed client/server application consisting of two main components. The MySQL Cluster Manager agent is a set of one or more agent processes that manage NDB Cluster nodes, and the MySQL Cluster Manager client provides a command-line interface to the agent's management functions.

MySQL Cluster Manager binary distributions that include MySQL NDB Cluster are used to install both MySQL Cluster Manager and MySQL NDB Cluster.

Steps for downloading MySQL Cluster Manager from the Oracle Software Delivery Cloud (OSDC) are found in the Pre-flight Checklist.

In OCCNE 1.0, MySQL Cluster is installed in each cluster as shown below.

Figure B-25 MySQL Cluster Topology

MySQL Cluster is installed on virtual machines, so the number of VMs required on the Storage Hosts and Kubernetes master nodes is as shown below. Each Kubernetes master node hosts one VM for a MySQL management node, giving three MySQL management nodes in the MySQL Cluster. On each Storage Host, four VMs are created: two VMs for data nodes, one VM for an SQL node, and one management server (Bastion Host) VM.

Number of MySQL management nodes: 3

Number of data nodes: 4

Number of SQL nodes: 2

Number of Bastion Hosts: 2

The table below shows the VMs created on the host servers:

Host Server        No. of VMs   Node Names
K8 Master node 1   1            MySQL management node 1
K8 Master node 2   1            MySQL management node 2
K8 Master node 3   1            MySQL management node 3
Storage Host 1     4            2 MySQL data nodes, 1 MySQL SQL node, 1 Bastion Host VM
Storage Host 2     4            2 MySQL data nodes, 1 MySQL SQL node, 1 Bastion Host VM

VM profile for the management and MySQL Cluster nodes:

Node Type              RAM     HDD      vCPUs   No. of Nodes
MySQL Management Node  8 GB    300 GB   4       3
MySQL Data Node        50 GB   800 GB   10      4
MySQL SQL Node         16 GB   600 GB   10      2
Bastion Host           8 GB    120 GB   4       2

The IP addresses, VM host names, and network information for creating the VMs are captured in the OCCNE 1.0 Installation Pre-flight Checklist.

Prerequisites

  1. All the host servers on which VMs are created are captured in OCCNE Inventory File Preparation; the Kubernetes master nodes are listed under [kube-master] and the Storage Hosts under [data_store].
  2. All hosts should be provisioned using the os-install container, as defined in the site hosts.ini file.
  3. The Oracle Linux 7.5 ISO (OracleLinux-7.5-x86_64-disc1.iso) is copied to /var/occne on the Bastion Host as specified in the OCCNE Oracle Linux OS Installer procedure. This "/var/occne" path is shared to the other hosts as specified in OCCNE Configuration of the Bastion Host.
  4. The host names, IP addresses, and network information assigned to these VMs should be captured in the Pre-flight Checklist.
  5. The Bastion Host should be installed on Storage Host (RMS2).
  6. The SSH keys configured on the host servers by the os-install container are stored on the Management Node.
  7. Storage Host (RMS1) and Storage Host (RMS2) should be configured with the same SSH keys.
  8. SSH keys should be configured on these VMs so that the db-install container can install the MySQL Cluster software on them; these keys are configured on the VMs via the kickstart files when the VMs are created.

Limitations and Expectations

  1. Each Storage Host has one management server VM, where Docker is installed; all the host servers are provisioned from this management server VM.
  2. Once both Storage Hosts and the other host servers are provisioned using the os-install container, the VMs are created on the Kubernetes master nodes and DB Storage Hosts.


Table B-14 Procedure to install VMs for MySQL Nodes and Management Server

Step # Procedure Description
1. Add bridge interface in all the hosts

Create a bridge interface on the team (team0) interface for creating the VMs.

Note: The steps below should be performed to create the bridge interfaces (teambr0 and vlan5-br) on each Storage Host and the bridge interface (teambr0) on each Kubernetes master node, one host at a time.

  1. Create a bridge interface (teambr0) on the team0 interface for the host network.
    1. Login to the host server.
      $ ssh admusr@10.75.216.XXX
      $ sudo su
    2. Note down the IP address, gateway IP, and DNS of the team0 interface; these details can be obtained from the "/etc/sysconfig/network-scripts/ifcfg-team0" interface file.

      IP ADDRESS

      GATEWAY IP

      DNS

    3. Create a new bridge interface (teambr0).
      $ nmcli c add type bridge con-name teambr0 ifname teambr0
    4. Modify this newly added bridge interface by assigning the IP address, gateway IP, and DNS recorded from the team0 interface.
      $ nmcli c mod teambr0 ipv4.method manual \
        ipv4.addresses <IPADDRESS/PREFIX> ipv4.gateway <GATEWAYIP> \
        ipv4.dns <DNSIPS> bridge.stp no
    5. Edit the "/etc/sysconfig/network-scripts/ifcfg-team0" interface file and add BRIDGE="teambr0" at the end of the file.
      $ echo 'BRIDGE="teambr0"' >> /etc/sysconfig/network-scripts/ifcfg-team0
    6. Remove the IPADDR, PREFIX, GATEWAY, and DNS entries from the "/etc/sysconfig/network-scripts/ifcfg-team0" interface file using the command below.
      $ sed -i '/IPADDR/d;/PREFIX/d;/GATEWAY/d;/DNS/d' /etc/sysconfig/network-scripts/ifcfg-team0
  2. Reboot the host server.
    $ reboot
  3. Create the signal bridge (vlan5-br) interface on the Storage Hosts.
    $ sudo su
    $ ip link add link team0 name team0.5 type vlan id 5
    $ ip link set team0.5 up
    $ brctl addbr vlan5-br
    $ ip link set vlan5-br up
    $ brctl addif vlan5-br team0.5
  4. Create the signal bridge (vlan5-br) interface configuration files on the Storage Hosts to keep these interfaces persistent across reboots. Update the variables below in the ifcfg config files using the sed commands that follow.
    1. PHY_DEV
    2. VLAN_ID
    3. BRIDGE_NAME
      Create the ifcfg-team0.5 and ifcfg-vlan5-br files in the /etc/sysconfig/network-scripts directory to keep these interfaces up across reboots.
      [root@db-2 network-scripts]# vi /tmp/ifcfg-team0.VLAN_ID
      VLAN=yes
      TYPE=Vlan
      PHYSDEV={PHY_DEV}
      VLAN_ID={VLAN_ID}
      REORDER_HDR=yes
      GVRP=no
      MVRP=no
      PROXY_METHOD=none
      BROWSER_ONLY=no
      DEFROUTE=no
      IPV4_FAILURE_FATAL=no
      DEVICE={PHY_DEV}.{VLAN_ID}
      NAME={PHY_DEV}.{VLAN_ID}
      ONBOOT=yes
      BRIDGE={BRIDGE_NAME}
      NM_CONTROLLED=no
       
       
      [root@db-2 network-scripts]# vi /tmp/ifcfg-BRIDGE_NAME
      STP=no
      BRIDGING_OPTS=priority=32768
      TYPE=Bridge
      PROXY_METHOD=none
      BROWSER_ONLY=no
      BOOTPROTO=none
      DEFROUTE=yes
      IPV4_FAILURE_FATAL=no
      IPV6INIT=yes
      IPV6_AUTOCONF=yes
      IPV6_DEFROUTE=yes
      IPV6_FAILURE_FATAL=no
      IPV6_ADDR_GEN_MODE=stable-privacy
      NAME={BRIDGE_NAME}
      DEVICE={BRIDGE_NAME}
      ONBOOT=yes
      NM_CONTROLLED=no
       
       
      $ cp /tmp/ifcfg-BRIDGE_NAME /etc/sysconfig/network-scripts/ifcfg-vlan5-br
      $ chmod 644 /etc/sysconfig/network-scripts/ifcfg-vlan5-br
      $ sed -i 's/{BRIDGE_NAME}/vlan5-br/g' /etc/sysconfig/network-scripts/ifcfg-vlan5-br
       
      $ cp /tmp/ifcfg-team0.VLAN_ID /etc/sysconfig/network-scripts/ifcfg-team0.5
      $ chmod 644 /etc/sysconfig/network-scripts/ifcfg-team0.5
      $ sed -i 's/{BRIDGE_NAME}/vlan5-br/g' /etc/sysconfig/network-scripts/ifcfg-team0.5
      $ sed -i 's/{PHY_DEV}/team0/g' /etc/sysconfig/network-scripts/ifcfg-team0.5
      $ sed -i 's/{VLAN_ID}/5/g' /etc/sysconfig/network-scripts/ifcfg-team0.5
  5. Reboot the host
    $ reboot

Perform the above steps on all the Kubernetes master nodes and Storage Hosts where the MySQL VMs are created.
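Since these bridge steps are repeated per host, they can be sketched as a small helper script. This is a hypothetical wrapper, not part of OCCNE: with DRY_RUN=1 (the default here) each command is only printed so the sequence can be reviewed per host; with DRY_RUN=0 the commands are executed (run as root). The arguments stand for the values recorded from ifcfg-team0.

```shell
#!/bin/sh
# Hypothetical helper sketching bridge steps 1.3-1.6 above. With DRY_RUN=1
# (the default) each command is printed instead of executed.
DRY_RUN="${DRY_RUN:-1}"
IFCFG=/etc/sysconfig/network-scripts/ifcfg-team0

run() {
    if [ "$DRY_RUN" = "1" ]; then printf '%s\n' "$*"; else eval "$*"; fi
}

# Arguments: IPADDRESS/PREFIX, GATEWAYIP, DNSIPS (recorded from ifcfg-team0).
make_teambr0() {
    run "nmcli c add type bridge con-name teambr0 ifname teambr0"
    run "nmcli c mod teambr0 ipv4.method manual ipv4.addresses $1 ipv4.gateway $2 ipv4.dns $3 bridge.stp no"
    # Enslave team0 to the bridge and drop its now-duplicated L3 settings.
    run "echo 'BRIDGE=\"teambr0\"' >> $IFCFG"
    run "sed -i '/IPADDR/d;/PREFIX/d;/GATEWAY/d;/DNS/d' $IFCFG"
}
```

For example, `make_teambr0 10.75.216.68/25 10.75.216.1 10.75.216.10` prints the four commands for review; the reboot in step 2 is still required after applying them.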

2. Configure SSH keys in Storage Host(RMS2)

After the Bastion Host is installed and configured and the OS installation is performed as configured in the host inventory file, the public key configured on Storage Host (RMS2) is different from the public key configured on every other host.

The following steps are performed to configure the public key on Storage Host (RMS2).

  1. Login to the Bastion Host and make sure the SSH keys generated by the os-install container are present in the "/var/occne/<cluster_name>" directory. The public key configured on every other host should also be configured on Storage Host (RMS2).
  2. Change to "/var/occne/<cluster_name>" directory.
    $ cd /var/occne/<cluster_name>;
  3. Copy the public key to Storage Host (RMS2). Update <cluster_name> and <RMS2_PRIVATE_KEY> in the command below and execute it to configure the SSH key.
    $ cat /var/occne/<cluster_name>/.ssh/occne_id_rsa.pub | ssh -i <RMS2_PRIVATE_KEY> admusr@172.16.3.6 "mkdir -p /home/admusr/.ssh; cat >> /home/admusr/.ssh/authorized_keys"
  4. Verify that you can log in to Storage Host (RMS2) using the SSH key. Since the public key is now configured on Storage Host (RMS2), it will not prompt for a password.
    $ ssh -i .ssh/occne_id_rsa admusr@10.75.216.XXX
3. Mount Linux ISO

The Oracle Linux 7.5 ISO (OracleLinux-7.5-x86_64-disc1.iso) is present in the "/var/occne" NFS path on the Bastion Host. This path should be mounted on all the Kubernetes master nodes and Storage Hosts for creating the VMs.

  1. Login to host.
    $ ssh admusr@10.75.XXX.XXX
    $ sudo su
  2. Create a mount directory on the host.
    $ mkdir -p /mnt/nfsoccne
    $ chmod -R 755 /mnt/nfsoccne
  3. Mount the NFS path from the Bastion Host where the Oracle Linux 7.5 ISO (OracleLinux-7.5-x86_64-disc1.iso) is present.
    $ mount -t nfs <BASTION_HOST_IP>:/var/occne /mnt/nfsoccne

Perform the above steps on all the Kubernetes master nodes and Storage Hosts where the MySQL VMs are created.
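Note that the mount in step 3 does not persist across a reboot. If persistence is wanted, an /etc/fstab entry for the same export can be added; the helper below is a hypothetical sketch that builds such a line using the same paths as step 3 (verify the mount options against your site standards).

```shell
#!/bin/sh
# Hypothetical helper: print an /etc/fstab line for the Bastion Host's
# /var/occne NFS export, mounted at /mnt/nfsoccne as in step 3.
fstab_line() {
    # $1 = Bastion Host IP
    printf '%s:/var/occne /mnt/nfsoccne nfs defaults 0 0\n' "$1"
}

# Example (append to /etc/fstab as root):
#   fstab_line 172.16.3.4 >> /etc/fstab
```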

4. Creating Logical Volumes in Storage Hosts

For the management (Bastion Host) VMs and the MySQL management node VMs, the default logical volume ('ol') is used. For the data nodes, two logical volumes are created, and each logical volume is assigned to one data node VM.

The procedures below create the logical volumes on the Storage Hosts. Each Storage Host contains 2 x 1.8 TB HDDs and 2 x 960 GB SSDs; the logical volumes are created on the SSD drives.

  1. Login to Storage Host
    $ sudo su
  2. Set the partition type to "Linux LVM (8e)" for /dev/sdc and /dev/sdd using the fdisk command.
    $ fdisk /dev/sdc
    Welcome to fdisk (util-linux 2.23.2).
     
    Changes will remain in memory only, until you decide to write them.
    Be careful before using the write command.
     
    Device does not contain a recognized partition table
    Building a new DOS disklabel with disk identifier 0x5d3d670f.
     
    The device presents a logical sector size that is smaller than
    the physical sector size. Aligning to a physical sector (or optimal
    I/O) size boundary is recommended, or performance may be impacted.
     
    Command (m for help): n
    Partition type:
       p   primary (0 primary, 0 extended, 4 free)
       e   extended
    Select (default p): p
    Partition number (1-4, default 1): 1
    First sector (2048-1875385007, default 2048):
    Using default value 2048
    Last sector, +sectors or +size{K,M,G} (2048-1875385007, default 1875385007):
    Using default value 1875385007
    Partition 1 of type Linux and of size 894.3 GiB is set
     
    Command (m for help): t
    Selected partition 1
    Hex code (type L to list all codes): 8e
    Changed type of partition 'Linux' to 'Linux LVM'
     
    Command (m for help): p
     
    Disk /dev/sdc: 960.2 GB, 960197124096 bytes, 1875385008 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk label type: dos
    Disk identifier: 0x5d3d670f
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdc1            2048  1875385007   937691480   8e  Linux LVM
    Command (m for help): w
    The partition table has been altered!
    Calling ioctl() to re-read partition table.
    Syncing disks.
    Perform the same steps for /dev/sdd to set the partition type to Linux LVM (8e).
    $ fdisk /dev/sdd
    Welcome to fdisk (util-linux 2.23.2).
     
    Changes will remain in memory only, until you decide to write them.
    Be careful before using the write command.
     
    Device does not contain a recognized partition table
    Building a new DOS disklabel with disk identifier 0xa241f6b9.
     
    The device presents a logical sector size that is smaller than
    the physical sector size. Aligning to a physical sector (or optimal
    I/O) size boundary is recommended, or performance may be impacted.
     
    Command (m for help): n
    Partition type:
       p   primary (0 primary, 0 extended, 4 free)
       e   extended
    Select (default p): p
    Partition number (1-4, default 1): 1
    First sector (2048-1875385007, default 2048):
    Using default value 2048
    Last sector, +sectors or +size{K,M,G} (2048-1875385007, default 1875385007):
    Using default value 1875385007
    Partition 1 of type Linux and of size 894.3 GiB is set
     
    Command (m for help): t
    Selected partition 1
    Hex code (type L to list all codes): 8e
    Changed type of partition 'Linux' to 'Linux LVM'
     
    Command (m for help): p
     
    Disk /dev/sdd: 960.2 GB, 960197124096 bytes, 1875385008 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk label type: dos
    Disk identifier: 0xa241f6b9
     
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdd1            2048  1875385007   937691480   8e  Linux LVM
     
    Command (m for help): w
    The partition table has been altered!
     
    Calling ioctl() to re-read partition table.
    Syncing disks.
  3. Create physical volumes.
    $ pvcreate /dev/sdc1
      Physical volume "/dev/sdc1" successfully created.
    $ pvcreate /dev/sdd1
      Physical volume "/dev/sdd1" successfully created.
  4. Create a separate volume group for each physical volume.
    $ vgcreate strip_vga /dev/sdc1
      Volume group "strip_vga" successfully created
    $ vgcreate strip_vgb /dev/sdd1
      Volume group "strip_vgb" successfully created
  5. Create logical volumes for the data nodes.
    $ lvcreate -L 900G -n strip_lva strip_vga
     Logical volume "strip_lva" created.
    $ lvextend -l +100%FREE /dev/strip_vga/strip_lva
     
    $ lvcreate -L 900G -n strip_lvb strip_vgb
     Logical volume "strip_lvb" created.
    $ lvextend -l +100%FREE /dev/strip_vgb/strip_lvb

These logical volumes are used when creating the MySQL data node VMs.
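Steps 3-5 above can be condensed into one helper per SSD partition. This is a hypothetical sketch: with DRY_RUN=1 (the default) the LVM commands are only printed, which is useful for reviewing the plan per Storage Host; set DRY_RUN=0 and run as root (after the fdisk partitioning in step 2) to apply them.

```shell
#!/bin/sh
# Hypothetical helper sketching steps 3-5 above for one SSD partition.
DRY_RUN="${DRY_RUN:-1}"

run() {
    if [ "$DRY_RUN" = "1" ]; then printf '%s\n' "$*"; else eval "$*"; fi
}

# Arguments: partition, volume group name, logical volume name.
make_data_lv() {
    run "pvcreate $1"
    run "vgcreate $2 $1"
    run "lvcreate -L 900G -n $3 $2"
    # Grow the LV into the remaining free space of the VG.
    run "lvextend -l +100%FREE /dev/$2/$3"
}

make_data_lv /dev/sdc1 strip_vga strip_lva
make_data_lv /dev/sdd1 strip_vgb strip_lvb
```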

5. Copy kickstart files

For creating the MySQL node VMs, kickstart template files are used. Download the kickstart files from OHC.
6. Steps for creating Bastion Host

The Bastion Host is used for host provisioning, for MySQL Cluster installation, and for installing the hosts with Kubernetes and the common services. The virt-install tool is used to create the Bastion Host.

The Bastion Hosts are already created during the installation procedure, so this procedure is required only when re-installing a Bastion Host on a Storage Host.

  1. Login to the Storage Host
  2. Follow the procedure OCCNE Installation of the Bastion Host for creating the Management VM in RMS2.
  3. Follow the procedure Configuration of the Bastion Host for configuring the Management VM.

Repeat these steps to create the other Bastion Host on the other Storage Host.

7. Steps for creating MySQL Management Node VM

For installing the MySQL NDB Cluster, three MySQL management node VMs are installed; one MySQL management node VM is created on each Kubernetes master node.

Perform the steps below to install a MySQL management node VM on a Kubernetes master node.

  1. Login to the Kubernetes master node host and make sure the bridge interface (teambr0) is present on this host. Follow step "1. Add bridge interface in all the hosts" to add the bridge interface (teambr0) if it does not exist.
    [root@k8s-1 admusr]# ifconfig teambr0
    teambr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 10.75.216.68  netmask 255.255.255.128  broadcast 10.75.216.127
            inet6 2606:b400:605:b827:1330:2c49:6b7e:8ffe  prefixlen 64  scopeid 0x0<global>
            inet6 fe80::2738:43b3:347:cd43  prefixlen 64  scopeid 0x20<link>
            ether b4:b5:2f:6d:22:30  txqueuelen 0  (Ethernet)
            RX packets 217597  bytes 19182440 (18.2 MiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 9193  bytes 1328986 (1.2 MiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
  2. Mount the NFS path from the management VM on this Kubernetes master node; follow step "3. Mount Linux ISO" for mounting the NFS path /var/occne on the host servers.
  3. Create Kickstart Template file for creating MySQL Management Node VM
    1. Change to root user
      $ sudo su
    2. Copy DB_MGM_TEMPLATE.ks to the /tmp directory on the Kubernetes master node host server.

    3. Copy DB_MGM_TEMPLATE.ks to DB_MGMNODE_1.ks
      $ cp /tmp/DB_MGM_TEMPLATE.ks /tmp/DB_MGMNODE_1.ks
    4. Update the kickstart file (DB_MGMNODE_1.ks) using the following commands to set these file variables:
      1. VLAN3_IPADDRESS: IP address assigned to this VM as configured in the hosts.ini inventory file (created using procedure: OCCNE Inventory File Preparation).
      2. VLAN3_GATEWAYIP: Gateway IP address for the IP address configured in the hosts.ini inventory file.
      3. VLAN3_NETMASKIP: Netmask for this network.
      4. NAMESERVERIPS: IP addresses of the DNS servers; separate multiple nameservers with commas, for example: 10.10.10.1,10.10.10.2. If there are no name servers to configure, remove this variable from the kickstart file:
        sed -i 's/--nameserver=NAMESERVERIPS//' /tmp/DB_MGMNODE_1.ks
      5. NODEHOSTNAME: host name of the VM as configured in the hosts.ini inventory file.
      6. NTPSERVERIPS: IP addresses of the NTP servers; separate multiple NTP servers with commas, for example: 10.10.10.3,10.10.10.4
      7. HTTP_PROXY: HTTP proxy for yum. If not required, comment out the 'echo "proxy=HTTP_PROXY" >> /etc/yum.conf' line in the kickstart file:
        sed -i 's/echo "proxy=HTTP_PROXY" >> \/etc\/yum.conf/#echo "proxy=HTTP_PROXY" >> \/etc\/yum.conf/' /tmp/DB_MGMNODE_1.ks
      8. PUBLIC_KEY: the SSH public key configured on the host (/home/admusr/.ssh/authorized_keys) is used to update the kickstart file, so that the VM can be accessed with the same private key generated during host provisioning.

        Note: HTTP_PROXY in the commands below requires only the host and port, as the "http://" prefix is provided in the sed command.

        $ sed -i 's/VLAN3_GATEWAYIP/ACTUAL_GATEWAY_IP/g' /tmp/DB_MGMNODE_1.ks
        $ sed -i 's/VLAN3_IPADDRESS/ACTUAL_IPADDRESS/g' /tmp/DB_MGMNODE_1.ks
        $ sed -i 's/NAMESERVERIPS/ACTUAL_NAMESERVERIPS/g' /tmp/DB_MGMNODE_1.ks
        $ sed -i 's/VLAN3_NETMASKIP/ACTUAL_NETMASKIP/g' /tmp/DB_MGMNODE_1.ks
        $ sed -i 's/NODEHOSTNAME/ACTUAL_NODEHOSTNAME/g' /tmp/DB_MGMNODE_1.ks
        $ sed -i 's/NTPSERVERIPS/ACTUAL_NTPSERVERIPS/g' /tmp/DB_MGMNODE_1.ks
        $ sed -i 's/HTTP_PROXY/http:\/\/ACTUAL_HTTP_PROXY/g' /tmp/DB_MGMNODE_1.ks
        $ sed -e '/PUBLIC_KEY/{' -e 'r  /home/admusr/.ssh/authorized_keys' -e 'd' -e '}' -i /tmp/DB_MGMNODE_1.ks
        For example, the commands below show how to update these values in the /tmp/DB_MGMNODE_1.ks file.
        $ sed -i 's/VLAN3_GATEWAYIP/172.16.3.1/g' /tmp/DB_MGMNODE_1.ks
        $ sed -i 's/VLAN3_IPADDRESS/172.16.3.91/g' /tmp/DB_MGMNODE_1.ks
        $ sed -i 's/NAMESERVERIPS/172.16.3.4/g' /tmp/DB_MGMNODE_1.ks
        $ sed -i 's/VLAN3_NETMASKIP/255.255.255.0/g' /tmp/DB_MGMNODE_1.ks
        $ sed -i 's/NODEHOSTNAME/db-mgm2.rainbow.lab.us.oracle.com/g' /tmp/DB_MGMNODE_1.ks
        $ sed -i 's/NTPSERVERIPS/172.16.3.4/g' /tmp/DB_MGMNODE_1.ks
        $ sed -i 's/HTTP_PROXY/http:\/\/www-proxy.us.oracle.com:80/g' /tmp/DB_MGMNODE_1.ks
        $ sed -e '/PUBLIC_KEY/{' -e 'r  /home/admusr/.ssh/authorized_keys' -e 'd' -e '}' -i /tmp/DB_MGMNODE_1.ks
  4. After updating the DB_MGMNODE_1.ks kickstart file, use the command below to start creating the MySQL management node VM. This command uses the "/tmp/DB_MGMNODE_1.ks" kickstart file to create and configure the MySQL management node VM. Update <NDBMGM_NODE_NAME> as specified in the hosts.ini inventory file (created using procedure: OCCNE Inventory File Preparation) and <NDBMGM_NODE_DESC> in the command below.
    $ virt-install --name <NDBMGM_NODE_NAME> --memory 8192 --memorybacking hugepages=yes --vcpus 4 \
                         --metadata description=<NDBMGM_NODE_DESC> --autostart --location /mnt/nfsoccne/OracleLinux-7.5-x86_64-disc1.iso \
                         --initrd-inject=/tmp/DB_MGMNODE_1.ks --os-variant ol7.5 \
                         --extra-args "ks=file:/DB_MGMNODE_1.ks console=tty0 console=ttyS0,115200" \
                         --disk path=/var/lib/libvirt/images/<NDBMGM_NODE_NAME>.qcow2,size=120 \
                         --network bridge=teambr0 --graphics none
    For example, after updating <NDBMGM_NODE_NAME> and <NDBMGM_NODE_DESC> in the above command:
    [root@k8s-1 admusr]# virt-install --name ndbmgmnodea1 --memory 8192 --memorybacking hugepages=yes --vcpus 4 \
                         --metadata description=ndbmgmnodea_vm1 --autostart --location /mnt/nfsoccne/OracleLinux-7.5-x86_64-disc1.iso \
                         --initrd-inject=/tmp/DB_MGMNODE_1.ks --os-variant ol7.5 \
                         --extra-args "ks=file:/DB_MGMNODE_1.ks console=tty0 console=ttyS0,115200" \
                         --disk path=/var/lib/libvirt/images/ndbmgmnodea1.qcow2,size=120 \
                         --network bridge=teambr0 --graphics none
    Starting install...
    Retrieving file .treeinfo...                                                                                                         
    Retrieving file vmlinuz...                                                                                                           
    Retrieving file initrd.img...                                                                                                        
    Allocating 'ndbmgmnodea1.qcow2'                                                                                                      
    Connected to domain ndbmgmnodea1
    Escape character is ^]
  5. After the installation is complete, you are prompted for login.
  6. To exit from the virsh console, press the CTRL+'5' keys (which send the ^] escape character) after logging out from the VM.
    $ exit
    Press the CTRL+'5' keys to exit from the virsh console.

Repeat these steps to create the remaining MySQL management node VMs on the Kubernetes master nodes.
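The per-variable sed edits in step 3.4 can be consolidated into one helper. This is a hypothetical sketch, not part of OCCNE: the placeholder names match the kickstart templates above, the PUBLIC_KEY line is replaced with the host's /home/admusr/.ssh/authorized_keys exactly as in the manual steps, and any example values used with it should come from the Pre-flight Checklist, not from this sketch.

```shell
#!/bin/sh
# Hypothetical helper: copy a kickstart template and substitute its
# placeholders in one pass (the same sed edits as step 3.4 above).
fill_kickstart() {
    # $1 template, $2 output file, $3 gateway, $4 IP, $5 netmask,
    # $6 nameservers, $7 hostname, $8 NTP servers, $9 proxy host:port
    cp "$1" "$2"
    sed -i \
        -e "s/VLAN3_GATEWAYIP/$3/g" \
        -e "s/VLAN3_IPADDRESS/$4/g" \
        -e "s/VLAN3_NETMASKIP/$5/g" \
        -e "s/NAMESERVERIPS/$6/g" \
        -e "s/NODEHOSTNAME/$7/g" \
        -e "s/NTPSERVERIPS/$8/g" \
        -e "s|HTTP_PROXY|http://$9|g" "$2"
    # Replace the PUBLIC_KEY line with the host's authorized_keys content.
    sed -e '/PUBLIC_KEY/{' -e 'r /home/admusr/.ssh/authorized_keys' -e 'd' -e '}' -i "$2"
}

# Example (illustrative values only):
#   fill_kickstart /tmp/DB_MGM_TEMPLATE.ks /tmp/DB_MGMNODE_1.ks \
#       172.16.3.1 172.16.3.91 255.255.255.0 172.16.3.4 \
#       db-mgm2.example.com 172.16.3.4 proxy.example.com:80
```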

8. Steps for creating MySQL Data Node VM

MySQL data node VMs are created on the Storage Hosts; each Storage Host contains 2 x 1.8 TB HDDs and 2 x 960 GB SSDs.

For the data nodes, the logical volumes are created on the SSD drives; each SSD drive is assigned to one logical volume, and each data node uses one logical volume.

The procedure for creating the logical volumes is specified in "4. Creating Logical Volumes in Storage Hosts" above.

To create these VMs, the DB data node kickstart template file (DB_DATANODE_TEMPLATE.ks) is used to generate an individual kickstart file for each MySQL data node VM. These kickstart files are updated with all the required information: network settings, admin user, host names, NTP servers, DNS servers, SSH keys, and so on.

  1. Login in to the Storage Host
    1. Check whether the bridge interface (teambr0) is present; if not, follow step "1. Add bridge interface in all the hosts".
      $ ifconfig teambr0
      teambr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
              inet 10.75.216.68  netmask 255.255.255.128  broadcast 10.75.216.127
              inet6 2606:b400:605:b827:1330:2c49:6b7e:8ffe  prefixlen 64  scopeid 0x0<global>
              inet6 fe80::2738:43b3:347:cd43  prefixlen 64  scopeid 0x20<link>
              ether b4:b5:2f:6d:22:30  txqueuelen 0  (Ethernet)
              RX packets 217597  bytes 19182440 (18.2 MiB)
              RX errors 0  dropped 0  overruns 0  frame 0
              TX packets 9193  bytes 1328986 (1.2 MiB)
              TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    2. Check whether the logical volumes (strip_lva, strip_lvb) exist on the Storage Host; if not, follow the steps in "4. Creating Logical Volumes in Storage Hosts".
      $ lvs
        LV        VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
        root      ol        -wi-ao----  20.00g
        var       ol        -wi-ao----  <1.62t
        strip_lva strip_vga -wi-ao---- 894.25g
        strip_lvb strip_vgb -wi-ao---- 894.25g
    3. Mount the NFS path on the host; follow step "3. Mount Linux ISO" for mounting the image from the shared NFS path.
  2. Create Kickstart file for creating MySQL Data Node VM.
    1. Change to root user
      $ sudo su
    2. Copy DB_DATANODE_TEMPLATE.ks to the /tmp directory on the DB Storage Host server.
    3. Copy DB_DATANODE_TEMPLATE.ks to DATANODEVM_1.ks
      $ cp /tmp/DB_DATANODE_TEMPLATE.ks /tmp/DATANODEVM_1.ks
    4. Update the kickstart file (DATANODEVM_1.ks) using the following commands to set these file variables:
      1. VLAN3_IPADDRESS: IP address assigned to this VM as configured in the hosts.ini inventory file (created using procedure: OCCNE Inventory File Preparation).
      2. VLAN3_GATEWAYIP: Gateway IP address for the IP address configured in the hosts.ini inventory file.
      3. VLAN3_NETMASKIP: Netmask for this network.
      4. NAMESERVERIPS: IP addresses of the DNS servers; separate multiple nameservers with commas, for example: 10.10.10.1,10.10.10.2. If there are no name servers to configure, remove this variable from the kickstart file:
        sed -i 's/--nameserver=NAMESERVERIPS//' /tmp/DATANODEVM_1.ks
      5. NODEHOSTNAME: host name of the VM as configured in the hosts.ini inventory file.
      6. NTPSERVERIPS: IP addresses of the NTP servers; separate multiple NTP servers with commas, for example: 10.10.10.3,10.10.10.4
      7. HTTP_PROXY: HTTP proxy for yum. If not required, comment out the 'echo "proxy=HTTP_PROXY" >> /etc/yum.conf' line in the kickstart file:
        sed -i 's/echo "proxy=HTTP_PROXY" >> \/etc\/yum.conf/#echo "proxy=HTTP_PROXY" >> \/etc\/yum.conf/' /tmp/DATANODEVM_1.ks
      8. PUBLIC_KEY: the SSH public key configured on the host (/home/admusr/.ssh/authorized_keys) is used to update the kickstart file, so that the VM can be accessed with the same private key generated during host provisioning.

        Note: HTTP_PROXY in the command below requires only the host and port, as the "http://" prefix is provided in the sed command.
      $ sed -i 's/VLAN3_GATEWAYIP/ACTUAL_GATEWAY_IP/g' /tmp/DATANODEVM_1.ks
      $ sed -i 's/VLAN3_IPADDRESS/ACTUAL_IPADDRESS/g' /tmp/DATANODEVM_1.ks
      $ sed -i 's/NAMESERVERIPS/ACTUAL_NAMESERVERIPS/g' /tmp/DATANODEVM_1.ks
      $ sed -i 's/VLAN3_NETMASKIP/ACTUAL_NETMASKIP/g' /tmp/DATANODEVM_1.ks
      $ sed -i 's/NODEHOSTNAME/ACTUAL_NODEHOSTNAME/g' /tmp/DATANODEVM_1.ks
      $ sed -i 's/NTPSERVERIPS/ACTUAL_NTPSERVERIPS/g' /tmp/DATANODEVM_1.ks
      $ sed -i 's/HTTP_PROXY/http:\/\/ACTUAL_HTTP_PROXY/g' /tmp/DATANODEVM_1.ks
      $ sed -e '/PUBLIC_KEY/{' -e 'r  /home/admusr/.ssh/authorized_keys' -e 'd' -e '}' -i /tmp/DATANODEVM_1.ks
    Similarly, generate the DATANODEVM_2.ks, DATANODEVM_3.ks, and DATANODEVM_4.ks kickstart files, which are used for creating the other MySQL data node VMs.

  3. After updating the DATANODEVM_1.ks kickstart file, use the command below to start creating the MySQL data node VM. This command uses the "/tmp/DATANODEVM_1.ks" kickstart file to create and configure the MySQL data node VM. Update <DATANODEVM_NAME> as specified in the hosts.ini inventory file (created using procedure: OCCNE Inventory File Preparation) and <DATANODEVM_DESC> in the command below.

    To create the ndbdatanodea1 data node VM on DB Storage Node 1:

    $ virt-install --name <DATANODEVM_NAME> --memory 51200 --memorybacking hugepages=yes  --vcpus 10 \
                   --metadata description=<DATANODEVM_DESC> --autostart --location /mnt/nfsoccne/OracleLinux-7.5-x86_64-disc1.iso \
                   --initrd-inject=/tmp/DATANODEVM_1.ks --os-variant=ol7.5 \
                   --extra-args="ks=file:/DATANODEVM_1.ks console=tty0 console=ttyS0,115200n8"  \
                   --disk path=/var/lib/libvirt/images/<DATANODEVM_NAME>.qcow2,size=100 --disk path=/dev/mapper/strip_vga-strip_lva \
                   --network bridge=teambr0 --nographics
    To create the ndbdatanodea2 data node VM on DB Storage Node 2:
    $ virt-install --name <DATANODEVM_NAME> --memory 51200 --memorybacking hugepages=yes  --vcpus 10 \
                   --metadata description=<DATANODEVM_DESC> --autostart  --location /mnt/nfsoccne/OracleLinux-7.5-x86_64-disc1.iso \
                   --initrd-inject=/tmp/DATANODEVM_2.ks --os-variant=ol7.5 \
                   --extra-args="ks=file:/DATANODEVM_2.ks console=tty0 console=ttyS0,115200n8"  \
                   --disk path=/var/lib/libvirt/images/<DATANODEVM_NAME>.qcow2,size=100 --disk path=/dev/mapper/strip_vga-strip_lva \
                   --network bridge=teambr0 --nographics
    To create the ndbdatanodea3 data node VM on DB Storage Node 1:
    $ virt-install --name <DATANODEVM_NAME> --memory 51200 --memorybacking hugepages=yes  --vcpus 10 \
                   --metadata description=<DATANODEVM_DESC> --autostart  --location /mnt/nfsoccne/OracleLinux-7.5-x86_64-disc1.iso \
                   --initrd-inject=/tmp/DATANODEVM_3.ks --os-variant=ol7.5 \
                   --extra-args="ks=file:/DATANODEVM_3.ks console=tty0 console=ttyS0,115200n8"  \
                   --disk path=/var/lib/libvirt/images/<DATANODEVM_NAME>.qcow2,size=100 --disk path=/dev/mapper/strip_vgb-strip_lvb \
                   --network bridge=teambr0 --nographics
    To create the ndbdatanodea4 data node VM on DB Storage Node 2:
    $ virt-install --name <DATANODEVM_NAME> --memory 51200 --memorybacking hugepages=yes  --vcpus 10 \
                   --metadata description=<DATANODEVM_DESC> --autostart  --location /mnt/nfsoccne/OracleLinux-7.5-x86_64-disc1.iso \
                   --initrd-inject=/tmp/DATANODEVM_4.ks --os-variant=ol7.5 \
                   --extra-args="ks=file:/DATANODEVM_4.ks console=tty0 console=ttyS0,115200n8"  \
                   --disk path=/var/lib/libvirt/images/<DATANODEVM_NAME>.qcow2,size=100 --disk path=/dev/mapper/strip_vgb-strip_lvb \
                   --network bridge=teambr0 --nographics
  4. After the installation is complete, the console prompts for login.

  5. After logging out from the VM, press the CTRL and '5' keys to exit the virsh console.
    $ exit
    Press CTRL+'5' to exit the virsh console.

Repeat these steps to create all the MySQL Data Node VMs on the Storage Hosts.
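After the Data Node VMs are created, a quick check with virsh can confirm each VM is defined. A minimal sketch, assuming virsh is available on the Storage Host and using ndbdatanodea2 as an example VM name (substitute the name passed to --name):

```shell
# Check that a newly created VM is known to libvirt (VM name is an example).
VMNAME=ndbdatanodea2
if command -v virsh >/dev/null 2>&1; then
  # dominfo shows state, autostart setting, memory, and vCPU count
  virsh dominfo "$VMNAME" || echo "$VMNAME is not defined on this host"
else
  echo "virsh not available on this host"
fi
```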

9. Steps for creating MySQL SQL Node VM

Along with the Data Node VMs, MySQL SQL Node VMs are created on the Storage Hosts; one SQL Node VM is created on each Storage Host.

To create these MySQL SQL Node VMs, the SQL Node kickstart template file (DB_SQL_TEMPLATE.ks) is used. The template file is updated with all the required information (network, admin user, host names, NTP servers, DNS servers, and so on).

  1. Log in to the Storage Host and make sure that the bridge interface (teambr0) and the VLAN 5 bridge interface (vlan5-br) are present. If not, follow the steps in "Add bridge interface in all the hosts".
    $ ifconfig teambr0
    teambr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 10.75.216.68  netmask 255.255.255.128  broadcast 10.75.216.127
            inet6 2606:b400:605:b827:1330:2c49:6b7e:8ffe  prefixlen 64  scopeid 0x0<global>
            inet6 fe80::2738:43b3:347:cd43  prefixlen 64  scopeid 0x20<link>
            ether b4:b5:2f:6d:22:30  txqueuelen 0  (Ethernet)
            RX packets 217597  bytes 19182440 (18.2 MiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 9193  bytes 1328986 (1.2 MiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
     
    $ ifconfig vlan5-br
    vlan5-br: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet6 fe80::645e:5cff:febf:fbd6  prefixlen 64  scopeid 0x20<link>
            ether 48:df:37:7a:40:48  txqueuelen 1000  (Ethernet)
            RX packets 150600  bytes 7522366 (7.1 MiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 7  bytes 626 (626.0 B)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
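The two interface checks above can be combined into a single loop that reports any missing bridge. A minimal sketch using ip link; the bridge names are the ones this procedure expects:

```shell
# Report whether each required bridge interface exists on this Storage Host.
for br in teambr0 vlan5-br; do
  if ip link show "$br" >/dev/null 2>&1; then
    echo "$br: present"
  else
    echo "$br: missing - follow 'Add bridge interface in all the hosts'"
  fi
done
```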
  2. Create the kickstart file for the MySQL SQL Node VM
    1. Change to root user
      $ sudo su
    2. Copy DB_SQL_TEMPLATE.ks into the /tmp directory on the Storage Host
    3. Copy DB_SQL_TEMPLATE.ks to DB_SQLNODE_1.ks
      $ cp /tmp/DB_SQL_TEMPLATE.ks /tmp/DB_SQLNODE_1.ks
    4. Update the kickstart file (DB_SQLNODE_1.ks) using the following commands to set the file variables listed below. Replace the ACTUAL_* values in the sed commands below with the actual values for this VM.
      1. VLAN3_IPADDRESS: IP address assigned to this VM as configured in the hosts.ini inventory file (created using procedure: OCCNE Inventory File Preparation).
      2. VLAN3_NETMASKIP: Netmask for this network.
      3. SIGNAL_VLAN5_IPADDRESS: Signalling IP address assigned to this VM.
      4. SIGNAL_VLAN5_GATEWAYIP: Gateway IP address for signalling network.
      5. SIGNAL_VLAN5_NETMASKIP: Netmask for this signalling network.
      6. NAMESERVERIPS: IP addresses of the DNS servers; separate multiple nameservers with commas, for example: 10.10.10.1,10.10.10.2. If there are no name servers to configure, remove this variable from the kickstart file: sed -i 's/--nameserver=NAMESERVERIPS//' /tmp/DB_SQLNODE_1.ks
      7. NODEHOSTNAME: host name of the VM as configured in hosts.ini inventory file.
      8. NTPSERVERIPS: IP addresses of the NTP servers; separate multiple NTP servers with commas, for example: 10.10.10.3,10.10.10.4
      9. HTTP_PROXY: HTTP proxy for yum. If not required, comment out the "echo "proxy=HTTP_PROXY" >> /etc/yum.conf" line in the kickstart file: sed -i 's/echo "proxy=HTTP_PROXY" >> \/etc\/yum.conf/#echo "proxy=HTTP_PROXY" >> \/etc\/yum.conf/' /tmp/DB_SQLNODE_1.ks
      10. PUBLIC_KEY: The SSH public key configured on the host (/home/admusr/.ssh/authorized_keys) is used to update the kickstart file, so that the VM can be accessed using the same private key generated during host provisioning.

        Note: HTTP_PROXY in the commands below requires only the URL, as the "http://" is provided in the sed command.

      $ sed -i 's/VLAN3_IPADDRESS/ACTUAL_VLAN3_IPADDRESS/g' /tmp/DB_SQLNODE_1.ks
      $ sed -i 's/VLAN3_NETMASKIP/ACTUAL_VLAN3_NETMASKIP/g' /tmp/DB_SQLNODE_1.ks
      $ sed -i 's/NAMESERVERIPS/ACTUAL_NAMESERVERIPS/g' /tmp/DB_SQLNODE_1.ks
      $ sed -i 's/SIGNAL_VLAN5_GATEWAYIP/ACTUAL_SIGNAL_VLAN5_GATEWAYIP/g' /tmp/DB_SQLNODE_1.ks
      $ sed -i 's/SIGNAL_VLAN5_IPADDRESS/ACTUAL_SIGNAL_VLAN5_IPADDRESS/g' /tmp/DB_SQLNODE_1.ks
      $ sed -i 's/SIGNAL_VLAN5_NETMASKIP/ACTUAL_SIGNAL_VLAN5_NETMASKIP/g' /tmp/DB_SQLNODE_1.ks
      $ sed -i 's/NODEHOSTNAME/ACTUAL_NODEHOSTNAME/g' /tmp/DB_SQLNODE_1.ks
      $ sed -i 's/NTPSERVERIPS/ACTUAL_NTPSERVERIPS/g' /tmp/DB_SQLNODE_1.ks
      $ sed -i 's/HTTP_PROXY/ACTUAL_HTTP_PROXY/g' /tmp/DB_SQLNODE_1.ks
      $ sed -e '/PUBLIC_KEY/{' -e 'r  /home/admusr/.ssh/authorized_keys' -e 'd' -e '}' -i /tmp/DB_SQLNODE_1.ks
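Before launching virt-install, it can help to confirm that no template placeholders remain in the kickstart file. A minimal sketch; the placeholder pattern mirrors the variables listed above:

```shell
# Report any unreplaced template variables left in the kickstart file.
KSFILE=/tmp/DB_SQLNODE_1.ks
if [ -f "$KSFILE" ]; then
  leftover=$(grep -oE 'ACTUAL_[A-Z_]+|VLAN3_[A-Z_]+|SIGNAL_VLAN5_[A-Z_]+|NAMESERVERIPS|NODEHOSTNAME|NTPSERVERIPS|HTTP_PROXY|PUBLIC_KEY' "$KSFILE" | sort -u)
  if [ -n "$leftover" ]; then
    echo "Unreplaced placeholders: $leftover"
  else
    echo "All placeholders replaced"
  fi
else
  echo "$KSFILE not found"
fi
```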
  3. After updating the DB_SQLNODE_1.ks kickstart file, use the command below to start creating the MySQL SQL Node VM. This command uses the "/tmp/DB_SQLNODE_1.ks" kickstart file to create and configure the MySQL SQL Node VM. Update <NDBSQL_NODE_NAME> as specified in the hosts.ini inventory file (created using procedure: OCCNE Inventory File Preparation) and <NDBSQL_NODE_DESC> in the command below.
    $ virt-install --name <NDBSQL_NODE_NAME> --memory 16384 --memorybacking hugepages=yes --vcpus 10 \
                                 --metadata description=<NDBSQL_NODE_DESC> --autostart --location /mnt/nfsoccne/OracleLinux-7.5-x86_64-disc1.iso \
                                 --initrd-inject=/tmp/DB_SQLNODE_1.ks --os-variant ol7.5 \
                                 --extra-args "ks=file:/DB_SQLNODE_1.ks console=tty0 console=ttyS0,115200" \
                                 --disk path=/var/lib/libvirt/images/<NDBSQL_NODE_NAME>.qcow2,size=600 \
                                 --network bridge=teambr0 --network bridge=vlan5-br --graphics none
  4. After the installation is complete, the console prompts for login.
  5. After logging out from the VM, press the CTRL and '5' keys to exit the virsh console.
    $ exit
    Press CTRL+'5' to exit the virsh console.

Repeat these steps to create the MySQL SQL Node VMs on the Storage Hosts.

10. Unmount Linux ISO

After all the MySQL node VMs are created on the Kubernetes master nodes and Storage Hosts, unmount "/mnt/nfsoccne" and delete the directory.

  1. Login to host.
  2. Unmount "/mnt/nfsoccne" in host
    $ umount /mnt/nfsoccne
  3. Delete directory
     $ rm -rf /mnt/nfsoccne
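The unmount and cleanup can be guarded so the step is safe to re-run on a host where the ISO share is already unmounted. A minimal sketch using mountpoint:

```shell
# Unmount the ISO share only if it is still mounted, then remove the directory.
MNT=/mnt/nfsoccne
if mountpoint -q "$MNT"; then
  umount "$MNT"
fi
if [ -d "$MNT" ]; then
  rm -rf "$MNT"
fi
echo "cleanup of $MNT done"
```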

Perform the above steps on all the K8 Master Nodes and Storage Hosts.

11. Open firewall ports

To install MySQL Cluster on these VMs, open the required firewall ports on the MySQL Management Node, Data Node, and SQL Node VMs.

The table below lists the ports to be opened in the firewall:

Node Name               Ports
MySQL Management Node   1862, 18620, 1186
MySQL Data Node         1862, 18620, 2202
MySQL SQL Node          1862, 18620, 3306

Firewall commands to open these ports are as follows.

  1. For MySQL Management Node VM's
    $ sudo su
    $ firewall-cmd --zone=public --permanent --add-port=1862/tcp
    $ firewall-cmd --zone=public --permanent --add-port=18620/tcp
    $ firewall-cmd --zone=public --permanent --add-port=1186/tcp
    $ firewall-cmd --reload
  2. For MySQL Data Node VM's
    $ sudo su
    $ firewall-cmd --zone=public --permanent --add-port=1862/tcp
    $ firewall-cmd --zone=public --permanent --add-port=18620/tcp
    $ firewall-cmd --zone=public --permanent --add-port=2202/tcp
    $ firewall-cmd --reload
  3. For MySQL SQL Node VM's
    $ sudo su
    $ firewall-cmd --zone=public --permanent --add-port=1862/tcp
    $ firewall-cmd --zone=public --permanent --add-port=18620/tcp
    $ firewall-cmd --zone=public --permanent --add-port=3306/tcp
    $ firewall-cmd --reload
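The three command sets above differ only in the last port, so they can be expressed as one small helper (open_ports is a hypothetical function name; the port lists come from the table above):

```shell
# Hypothetical helper: open a list of TCP ports in the public zone,
# then reload firewalld so the permanent rules take effect.
open_ports() {
  for p in "$@"; do
    firewall-cmd --zone=public --permanent --add-port="${p}/tcp"
  done
  firewall-cmd --reload
}
# Management node: open_ports 1862 18620 1186
# Data node:       open_ports 1862 18620 2202
# SQL node:        open_ports 1862 18620 3306
```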