7.2 "Command Failed" Error When Starting the Ansible Container

When you perform a deployment, you might experience a "Command failed" error when starting the Ansible container, for example:

TASK [common : Starting Ansible container]
*************************************
fatal: [ol7-c4]: FAILED! => {"changed": false, "failed": true, "msg":
"ConnectionError(ProtocolError('Connection aborted.', error(2, 'No such file or directory')),)"}

PLAY RECAP
*********************************************************************
ol7-c4                     : ok=3    changed=0    unreachable=0    failed=1
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/kollacli/commands/deploy.py", line66, in take_action
    verbose_level)
  File "/usr/lib/python2.7/site-packages/kollacli/common/ansible/actions.py",line 89, in deploy
    playbook.run()
  File"/usr/lib/python2.7/site-packages/kollacli/common/ansible/playbook.py", line139, in run
    raise e
CommandError: ERROR: Command failed. :

This occurs when the Docker Engine is not running on a target node. To resolve this issue:

  1. Ensure that the Docker Engine is running on all target nodes. A way to check every target node from the master node in one pass is sketched after these steps.

    To check that the Docker Engine is running:

    $ systemctl status docker.service
    ● docker.service - Docker Application Container Engine
       Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
      Drop-In: /etc/systemd/system/docker.service.d
               └─docker-sysconfig.conf, var-lib-docker-mount.conf
       Active: inactive (dead) since Tue 2016-03-29 13:20:53 BST; 2min 35s ago
    ...

    If the output of this command shows the status of the Docker service to be inactive (dead), start the Docker Engine:

    # systemctl start docker.service
  2. From the master node, deploy OpenStack services to the target nodes:

    $ kollacli deploy
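
If you have several target nodes, it can be quicker to check and start the Docker Engine on all of them from the master node before re-running the deployment. The following is a minimal sketch only; it assumes passwordless SSH access as root to each target node, and the node names ol7-c4 and ol7-c5 are placeholders for the hosts in your own inventory:

#!/bin/bash
# Check the Docker Engine on each target node and start it if it is not running.
# Replace these placeholder node names with the hosts in your inventory.
NODES="ol7-c4 ol7-c5"

for node in $NODES; do
    if ssh root@"$node" systemctl is-active --quiet docker.service; then
        echo "$node: Docker Engine is running"
    else
        echo "$node: Docker Engine is not running, starting it"
        ssh root@"$node" systemctl start docker.service
        # Enabling the service ensures the Docker Engine starts again after a reboot.
        ssh root@"$node" systemctl enable docker.service
    fi
done

Run the script from the master node, confirm that every node reports the Docker Engine as running, and then run kollacli deploy again.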