Oracle® VM Server for SPARC OpenStack Nova Driver and Utilities 1.0 Administration Guide


Updated: May 2017
 
 

Known Issues

Using an “All-in-One” OpenStack Configuration

An OpenStack “all-in-one” configuration is a topology where the cloud controller and all other logical nodes, such as nova compute nodes, co-exist in the same control domain. You might encounter some problems when using such a topology.


Caution - Do not use an all-in-one configuration for important or critical environments, such as production environments.


    If you must set up an all-in-one configuration, consider and address the following issues that might affect your environment:

  • Resolve the port conflicts between the Oracle VM Server for SPARC virtual console concentrator (VCC) service and the OpenStack keystone service.

      An all-in-one configuration might cause a port conflict at one of the following times:

    • Upon deployment of the first guest domain on this machine

    • Immediately if a guest domain has already been deployed

    This conflict might occur because both the OpenStack keystone service and the Oracle VM Server for SPARC VCC service use port 5000 by default.

    To work around this conflict, specify a VCC port range that starts above port 5000 when you configure the control domain:

    # ldm set-vcc port-range=5001-5100 vcc-name

    Note - To determine the VCC name, run the ldm list-services command.
  • Install and set up the all-in-one configuration in the following order to avoid common problems, such as accidentally overwriting OpenStack configuration files. A command-level sketch of this sequence appears after the list.

    1. Install the openstack-controller-ldoms package.

    2. Install the nova-ldoms package.

    3. Run the /opt/openstack-ldoms/bin/demo-controller-setup.sh script.

    4. Run the /opt/openstack-ldoms/bin/setup.sh script.

    5. Customize the configuration, as needed.

    6. Reboot the system.

    7. Ensure that all enabled OpenStack services start correctly.
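The following is a minimal sketch of this sequence, assuming that the packages can be installed with the pkg command under the names used in the preceding steps and that the svcs -xv command is used as one way to find services that did not start correctly. Customize the configuration files, as needed, before running the init 6 command.

# pkg install openstack-controller-ldoms
# pkg install nova-ldoms
# /opt/openstack-ldoms/bin/demo-controller-setup.sh
# /opt/openstack-ldoms/bin/setup.sh
# init 6
# svcs -xv

After the reboot, investigate any OpenStack service that svcs -xv reports as being in the maintenance state before you continue.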

Cannot Type in the Console Window for a VM

This is a general OpenStack console-focus issue that is not specific to the Oracle VM Server for SPARC OpenStack Nova driver.

To address this issue, click the blue bar at the top of the console window.

Cannot Deploy EFI Images to Older Hardware

Some older servers (such as UltraSPARC T2 servers) do not support EFI disk labels. Therefore, you must create VTOC-based VM images to support both old and new hardware. This requirement also imposes disk size limitations.

Cannot Set cpu-arch Property Value After Deployment

If the cpu-arch property is set on a VM, the nova driver cannot change the cpu-arch property value later. This issue occurs because flavor migration is not yet supported by the Oracle VM Server for SPARC OpenStack Nova driver.

Oracle Solaris 10 Guest Domains: Automated Disk Expansion Is Supported Only With a ZFS Root

The automated disk expansion capability requires a ZFS root. File systems and volume managers such as UFS, SVM, and VxFS are not supported by this capability.
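To confirm that an Oracle Solaris 10 guest domain uses a ZFS root, one option is to print the file system type of the root file system from inside the guest; the output shown here is illustrative:

guest# df -n /
/                  : zfs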

Linux for SPARC Does Not Support All Oracle VM Server for SPARC Features

    The following Oracle VM Server for SPARC features do not work with a guest domain that is running Linux for SPARC 1.0:

  • Dynamic volume attachment and detachment

  • Dynamic network attachment and detachment

  • Live migration

Console Logs Are Not Available After a Live Migration

The vntsd console log is not migrated with the guest domain. As a result, console log entries from before the migration are no longer available, and only entries written after the migration appear.

Mismatched MTUs on the Management Network Can Be Problematic

You might experience problems with the message queue or other OpenStack services if your controller and compute nodes have mismatched MTUs on their management interfaces, which are used for OpenStack management communications. For example, a mismatched configuration might have the compute node management interface set to an MTU of 9000 bytes while the controller node interface is set to 1500 bytes. Ensure that the MTU is the same on the management network interface of every host.
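On Oracle Solaris hosts, one way to check and, if necessary, align the management interface MTU is with the dladm command. In the following sketch, net0 is a hypothetical management link name; depending on the link type and state, you might need to bring down the IP interface that uses the link before the new MTU takes effect.

# dladm show-linkprop -p mtu net0
# dladm set-linkprop -p mtu=1500 net0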

Avoid Inline Comments in OpenStack Configuration Files

You might experience problems if you add a comment (#) to the end of a configuration line in an OpenStack configuration file. OpenStack interprets inline comments as part of the value.

Ensure that you place comments on lines by themselves and that the comment line begins with a comment symbol (#).

For example, the admin_password=welcome1 #my password configuration line is interpreted as specifying the password as welcome1 #my password.
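The same setting is interpreted as intended when the comment is placed on its own line:

# my password
admin_password=welcome1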

Use the following line to check a configuration file for inline comments:

# cat /etc/service/service.conf | egrep -v '^#' | grep '#'

nova-compute Service Hangs at Mounting NFS Share Stage

Ensure that the NFS server settings are correct. If you choose the wrong server, the nova-compute service appears to hang at boot while attempting to mount the NFS share.

To work around this problem, disable the nova-compute service and kill the mount attempt to the wrong share. The driver might make additional attempts to mount the share, so ensure that you kill every attempt to mount the wrong share after the nova-compute service is disabled. Then, correct the nova.conf file and enable the nova-compute service.
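The following is a hedged sketch of this workaround. The pkill pattern wrong-nfs-server is a placeholder for the incorrectly specified NFS server; repeat the pgrep and pkill commands until no mount attempts to the wrong share remain, and correct the NFS server settings in the nova.conf file before you re-enable the service.

nova# svcadm disable nova-compute
nova# pgrep -fl mount
nova# pkill -f wrong-nfs-server
nova# svcadm enable nova-compute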

“Rebuild” Does Not Actually Rebuild the VM

The rebuild operation is not yet supported by the Oracle VM Server for SPARC OpenStack Nova driver. Only the Nova evacuate operation is supported. If a user attempts to perform a rebuild operation, the VM's existing disk might be recycled and not “re-imaged.”

The nova-compute Service Times Out Waiting For Cinder to Create a LUN When You Create a New Volume

If a Cinder volume is being created with an OS image on it, the OS image copy might take a long time, and Nova can time out waiting for Cinder to complete its task. The nova-compute service (outside of the Oracle VM Server for SPARC Nova driver) simply polls for a period of time and waits to see whether Cinder created the volume.

Consider increasing the following value if you experience these “hangs” in your environment:

block_device_allocate_retries=360

Then, restart the nova-compute service.
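For example, assuming that the option is set in the [DEFAULT] section of the /etc/nova/nova.conf file on the compute node, the change and the restart might look like the following:

[DEFAULT]
block_device_allocate_retries=360

nova# svcadm restart nova-compute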

Compute Node Panics Because of DLM Fencing

If you experience problems accessing the NFSv4 share, a compute node might panic. If the NFSv4 share becomes unavailable, lags, or has other connectivity issues for ten minutes or more, the compute nodes fence themselves off by issuing a panic to the control domain. If you experience this problem frequently, disable DLM by commenting out the dlm_nfs_server entry while you identify the root cause of the issue.

Ensure that your NFSv4 storage is highly available and resilient. Also ensure that delegation is disabled.
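The following sketch shows the dlm_nfs_server entry commented out. The file location (/etc/nova/nova.conf on the compute node) and the server name are assumptions for illustration; restart the nova-compute service after changing the file.

#dlm_nfs_server=nfs-server-hostname

nova# svcadm restart nova-compute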

After Installing the Controller Package, the neutron-server Service Goes Into Maintenance Mode

This problem arises if the neutron-server service is configured for EVS rather than for ML2 and if the profile attempts to bring the neutron-server service online before it is properly configured.

To correct this issue, restart the manifest-import service and disable the neutron-server service by running these commands:

cctrl# svcadm restart manifest-import
cctrl# svcadm disable neutron-server

If you are configuring your cloud controller services manually, you must complete configuring your /etc/neutron/neutron.conf and /etc/neutron/api-paste.ini cloud controller files before you re-enable the neutron-server service.
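When the configuration files are complete, re-enable the service and confirm that it comes online, for example:

cctrl# svcadm enable neutron-server
cctrl# svcs neutron-server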

vfs_mountroot Panic Occurs on the First Boot of a Guest Domain

If a vfs_mountroot panic occurs on the first boot of a guest domain, see Golden Image Limitations.

Serial Console Immediately Closes Its Connection

The serial console validates that the URL in the browser matches the configuration. If the serial console closes the connection immediately, you might need to change the base_url property value to match the host name of the controller node that you use to access the console. This name is likely to be the domain name of the system or another front end such as a load balancer or reverse proxy.

The base_url property is specified on compute nodes in the /etc/nova/nova.conf file.

The following example nova.conf file shows the base_url property changed from an IP address to a host name that matches the controller name, cloud-controller-hostname:

[serial_console]
enabled=true
#base_url=ws://10.0.68.51:6083/
base_url=ws://cloud-controller-hostname:6083/
listen=$my_ip
proxyclient_address=$my_ip
serialproxy_host=10.0.68.51

After you make changes to the nova.conf file, restart the nova-compute service.

nova# svcadm restart nova-compute