Use an OCI Dynamic Inventory with Oracle Linux Automation Engine

Introduction

Oracle Linux Automation Engine, an open-source software for provisioning and configuration management, utilizes an inventory file to operate against managed nodes or hosts within your infrastructure. This inventory file contains a list of servers, their IP addresses, and other optional connection information.

A static inventory file works well if your infrastructure hardly changes.

However, your infrastructure is likely in constant flux when using the cloud. Therefore, it would be beneficial to have your inventory dynamically updated as hosts are added and removed.
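
For illustration, a minimal static inventory in INI format might look like the following, where the group names, host names, and addresses are all placeholders:

    [webservers]
    web01 ansible_host=192.0.2.10
    web02 ansible_host=192.0.2.11

    [databases]
    db01 ansible_host=192.0.2.20

A dynamic inventory replaces this hand-maintained file with a plugin that queries your cloud provider for the current list of hosts at run time.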

Objectives

In this tutorial, you’ll learn to:

  - Install the OCI SDK for Python on an Oracle Linux control node
  - Install the OCI Ansible Collection
  - Configure and test the OCI dynamic inventory plugin
  - Run a playbook against hosts discovered through the dynamic inventory

Prerequisites

  - Two Oracle Linux instances (a control node and a remote host)
  - Access to the free lab environment or your own OCI tenancy and compartment
  - An SSH key pair for connecting to the instances

Deploy Oracle Linux Automation Engine

Note: If running in your own tenancy, read the linux-virt-labs GitHub project README.md and complete the prerequisites before deploying the lab environment.

  1. Open a terminal on the Luna Desktop.

  2. Clone the linux-virt-labs GitHub project.

    git clone https://github.com/oracle-devrel/linux-virt-labs.git
    
  3. Change into the working directory.

    cd linux-virt-labs/olam
    
  4. Install the required collections.

    ansible-galaxy collection install -r requirements.yml
    
  5. Update the Oracle Linux instance configuration.

    cat << EOF | tee instances.yml > /dev/null
    compute_instances:
      1:
        instance_name: "ol-control-node"
        type: "control"
      2:
        instance_name: "ol-host"
        type: "remote"
    olam_type: none
    EOF
    
  6. Create an inventory file.

    cat << EOF | tee hosts > /dev/null
    localhost ansible_connection=local ansible_python_interpreter=/usr/bin/python3.6
    EOF
    
  7. Deploy the lab environment.

    ansible-playbook create_instance.yml -i hosts -e "@instances.yml"
    

    The free lab environment requires the extra variable ansible_python_interpreter for localhost because it installs the RPM package for the Oracle Cloud Infrastructure SDK for Python. That package installs under the system’s default Python modules for your version of Oracle Linux. Scoping the interpreter to localhost with an inventory variable avoids impacting plays that run on other hosts.

    The default deployment uses an AMD-based instance shape. You can change the shape of the instances by passing a new shape variable definition on the command line.

    For example: -e instance_shape="VM.Standard3.Flex"

    Similarly, the default Oracle Linux image version comes from the os_version variable defined in the default_vars.yml file. You can modify this value by passing the Oracle Linux major version on the command line.

    For example: -e os_version="9"

    Important: Wait for the playbook to run successfully and reach the pause task. At this stage of the playbook, the installation of Oracle Linux is complete, and the instances are ready. Note the previous play, which prints the public and private IP addresses of the nodes it deploys.

Set Up the Oracle Linux Automation Engine Control Node

The control node is the system for running the Oracle Linux Automation Engine playbooks. Running playbooks requires the installation of the Oracle Linux Automation Engine package.

  1. Set a variable equal to the control node’s IP address.

    export CONTROL="<ip_address_of_ol-control-node>"
    
  2. Open a terminal and copy the SSH key pair to the control node.

    scp -rp ~/.ssh/id_rsa* oracle@$CONTROL:~/.ssh/
    
  3. Set the permissions on the SSH private key.

    ssh oracle@$CONTROL "chmod 600 ~/.ssh/id_rsa"
    
  4. Connect to the ol-control-node system via SSH.

    ssh oracle@$CONTROL
    
  5. Install the Oracle Linux Automation Engine package and dependencies.

    sudo dnf install -y ansible-core
    

    The ansible-core package is available in the AppStream repository.
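
    To confirm the installed version and its source repository, you can query dnf:

    dnf info ansible-core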

  6. Test the package installation.

    ansible --version
    

    Review the output and note the default Python version that Oracle Linux Automation Engine uses. That is the environment where we must install the Oracle Cloud Infrastructure (OCI) SDK for Python.

    Note: If the output shows ERROR: Ansible requires the locale encoding to be UTF-8; Detected None., this indicates an incorrect locale setting for Ansible. Fix the issue by setting these two environment variables:

    export LC_ALL="en_US.UTF-8"
    export LC_CTYPE="en_US.UTF-8"
    

Install Oracle Cloud Infrastructure SDK for Python

The OCI Dynamic Inventory plugin requires a working OCI SDK for Python configuration on the control node. We can install the OCI SDK using the Oracle Linux RPM or PIP, the package installer for Python.

  1. Install the OCI SDK for Python using PIP.

    1. Install the packages and dependencies for PIP.

      Oracle Linux 8:

      sudo dnf install -y python3.12-pip python3.12-setuptools
      

      Oracle Linux 9:

      sudo dnf install -y python3.9-pip python3.9-setuptools
      
    2. Install the Python packages.

      Oracle Linux 8:

      /usr/bin/python3.12 -m pip install oci
      

      Oracle Linux 9:

      /usr/bin/python3.9 -m pip install oci
      

      Add the --proxy option if you are behind a proxy. Details are available by running python3.12 -m pip help install (use python3.9 on Oracle Linux 9).
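
      For example, a proxy-aware install might look like this, where the proxy URL is a placeholder for your environment:

      /usr/bin/python3.12 -m pip install --proxy http://proxy.example.com:3128 oci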

  2. Test the OCI SDK for Python installation by printing its version.

    Oracle Linux 8:

    python3.12 -c "import oci;print(oci.__version__)"
    

    Oracle Linux 9:

    python3.9 -c "import oci;print(oci.__version__)"
    
  3. Create the OCI SDK default configuration directory.

    mkdir -p ~/.oci
    
  4. Create the SDK default configuration file.

    The free lab provides a pre-generated SDK configuration, which we can copy to the ol-control-node system using scp.

    1. Open a new terminal from the desktop environment.

    2. Copy all of the SDK configuration files to the ol-control-node system.

      scp ~/.oci/* oracle@<ip_address_of_instance>:~/.oci/.
      
      exit
      

    If you’re following this tutorial outside of the free lab environment, see the instructions provided within the SDK and CLI Configuration File and Required Keys and OCIDs sections of the OCI Documentation to generate your OCI configuration file.
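
    For reference, a working SDK configuration file follows this general shape, where every value is a placeholder (the free lab configuration also contains a compartment-id entry, which this tutorial uses later):

    [DEFAULT]
    user=ocid1.user.oc1..<unique_id>
    fingerprint=<key_fingerprint>
    key_file=/home/oracle/.oci/oci_api_key.pem
    tenancy=ocid1.tenancy.oc1..<unique_id>
    region=us-ashburn-1
    compartment-id=ocid1.compartment.oc1..<unique_id>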

  5. Switch to the terminal window connected to the ol-control-node system.

  6. Update the location of the key_file in the SDK configuration file.

    When copying the SDK configuration file from the desktop environment, we must update the home directory portion of the key_file path so that it matches the user name on the control node system.

    sed -i 's/luna.user/oracle/g' ~/.oci/config
    
  7. Create a test Python script to verify the SDK is working.

    cat << EOF | tee test.py > /dev/null
    import oci
    object_storage_client = oci.object_storage.ObjectStorageClient(oci.config.from_file())
    result = object_storage_client.get_namespace()
    print("Current object storage namespace: {}".format(result.data))
    EOF
    

    The test.py script displays the Object Storage namespace for the configured OCI Tenancy and Compartment.

  8. Run the script.

    Oracle Linux 8:

    python3.12 test.py
    

    Oracle Linux 9:

    python3.9 test.py
    

    The test script successfully prints the unique namespace of the configured tenancy.
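
    Based on the format string in test.py, the output resembles the following, where the namespace value is specific to your tenancy:

    Current object storage namespace: <your_tenancy_namespace>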

Install the Oracle Cloud Infrastructure Ansible Collection

The OCI Ansible Collection contains a set of modules that automate cloud infrastructure provisioning and configuration, orchestrate complex operational processes, and deploy and update software assets.

  1. Create a project directory.

    mkdir ~/myproject
    
  2. Create a requirements file.

    cat << EOF | tee ~/myproject/requirements.yml > /dev/null
    ---
    collections:
      - name: oracle.oci
    EOF
    
  3. Install the OCI Ansible Collection.

    ansible-galaxy collection install -r ~/myproject/requirements.yml
    

    If you have installed a previous version, get the latest release by running the command with the --force option.

    ansible-galaxy collection install --force oracle.oci
    
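
    You can verify the installed collection and its version with this command:

    ansible-galaxy collection list oracle.oci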

Working with OCI Dynamic Inventory

Oracle includes its dynamic inventory plugin in the OCI Ansible Collection.

  1. Configure the inventory plugin by creating a YAML configuration source.

    The source filename needs to be <filename>.oci.yml or <filename>.oci.yaml, where <filename> is a meaningful identifier you choose.

    cat << EOF | tee ~/myproject/myproject.oci.yml > /dev/null
    ---
    plugin: oracle.oci.oci
    
    # Optional fields to specify oci connection config:
    config_file: ~/.oci/config
    config_profile: DEFAULT
    EOF
    
  2. Test the inventory plugin by creating an inventory graph.

    ansible-inventory -i ~/myproject/myproject.oci.yml --graph
    

    The output shows a series of warnings and errors. So what went wrong?

    The error occurs because the plugin requires a compartment OCID. If you provide the tenancy OCID rather than a compartment OCID and have the correct permissions, the plugin generates an inventory for the entire tenancy.
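
    As a sketch, an inventory source targeting the whole tenancy might look like the following, where the OCID is a placeholder and fetch_hosts_from_subcompartments is an option available in recent versions of the plugin:

    plugin: oracle.oci.oci
    compartments:
      - compartment_ocid: "ocid1.tenancy.oc1..<unique_id>"
        fetch_hosts_from_subcompartments: true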

    Since the plugin cannot read the compartment OCID information directly from the SDK configuration file, add it to the plugin configuration source file.

  3. Grab the compartment OCID from the SDK configuration file and assign it to the variable comp_ocid.

    comp_ocid=$(grep -i compartment ~/.oci/config | sed -e 's/compartment-id=//g')
    
  4. Append the compartments parameter to the plugin source file.

    cat << EOF | tee -a ~/myproject/myproject.oci.yml > /dev/null
    
    compartments:
      - compartment_ocid: "$comp_ocid"
        fetch_compute_hosts: true
    EOF
    

    Setting fetch_compute_hosts to true results in the inventory gathering information only on compute hosts and ignoring other instance types deployed within the compartment.

  5. Rerun the test.

    ansible-inventory -i ~/myproject/myproject.oci.yml --graph
    

    The output lists the compute instances available within the compartment as inventory groups, designated by the @ character, and displays each instance’s public IP address.

    What if we wanted the private IP address?

    You may need the private IP address depending on the physical location of the control node or the network topology configured within your cloud infrastructure. The private IP address is also the only option when the compute instances do not have a public IP address.

  6. Change the plugin hostname format parameter by updating the plugin configuration source file.

    cat << EOF | tee -a ~/myproject/myproject.oci.yml > /dev/null
    
    hostname_format_preferences:
      - "private_ip"
      - "public_ip"
    EOF
    

    The example format above prioritizes a system’s private IP address over its public IP address. For more details on this configuration, see Hostname Format Preferences in the documentation.

  7. Retest the plugin.

    ansible-inventory -i ~/myproject/myproject.oci.yml --graph
    

    The output now displays the private IP address.

Run a Playbook

With the dynamic inventory set up and configured, we can use it to run a simple playbook. Ensure that you enable SSH access between your control node and any remote nodes.

  1. Create a playbook that pings the host.

    cat << EOF | tee ~/myproject/ping.yml > /dev/null
    ---
    - hosts: all,!$(hostname -i)
      tasks:
      - name: Ansible ping test
        ansible.builtin.ping:
    EOF
    

    Oracle Linux Automation Engine expects a comma-separated list of hosts or groups after the - hosts: entry, and the ! prefix excludes the entry that follows it. The all entry pings every host that appears under the top-level @all group in the inventory graph. You can target a different group from the graph output by removing the @ character from its name and entering that name in the - hosts: entry, as shown in the sketch below.
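
    For example, if the graph output contains a group such as @us-ashburn-1_ad_1 (a hypothetical name; group names vary by tenancy and region), a playbook targeting only that group might look like this sketch:

    ---
    - hosts: us-ashburn-1_ad_1
      tasks:
      - name: Ansible ping test
        ansible.builtin.ping: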

  2. Run the playbook.

    ansible-playbook -u opc -i ~/myproject/myproject.oci.yml ~/myproject/ping.yml
    

    Accept the ECDSA key fingerprint when prompted.

    The -i option sets the dynamic inventory file used.

    The -u option sets the remote SSH user when attempting a connection.
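
    If you prefer not to pass these options on every run, a minimal sketch of an ansible.cfg in the project directory can set them as defaults:

    cat << EOF | tee ~/myproject/ansible.cfg > /dev/null
    [defaults]
    inventory = ~/myproject/myproject.oci.yml
    remote_user = opc
    EOF

    Ansible reads ansible.cfg from the current working directory, so run your playbooks from within ~/myproject for these settings to take effect.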

Next Steps

Completing the playbook run with an ok status confirms that Oracle Linux Automation Engine successfully uses the OCI dynamic inventory to communicate with the remote instance it discovers within the compartment. Continue learning and use this feature to help manage your fleet of OCI instances and perform routine administration tasks on Oracle Linux.

More Learning Resources

Explore other labs on docs.oracle.com/learn or access more free learning content on the Oracle Learning YouTube channel. Additionally, visit education.oracle.com/learning-explorer to become an Oracle Learning Explorer.

For product documentation, visit Oracle Help Center.