
Up and Running with Ansible Configuration Management

August 23, 2019

In this blog we want to give you a 30,000-foot overview of Ansible configuration management, getting you up and running with an Ansible configuration file, an inventory and a Playbook. We will give little explanation at this stage as that will follow later; the goal is for you to gain some understanding of what Ansible can do and how it does it. First, as a standard user, we will run ansible --version, not for the version itself, but to learn the configuration file location. We assume you have Ansible installed. To install Ansible on CentOS, enable the EPEL repository and then install the package named ansible. On RHEL you need to enable the correct repository before installing.
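As a rough sketch, the installation looks something like the commands below; the RHEL repository name shown is an assumption and depends on your release and architecture, so check the names available to your subscription.

$ sudo yum install -y epel-release   # CentOS: enable EPEL first
$ sudo yum install -y ansible

$ sudo subscription-manager repos --enable ansible-2-for-rhel-8-x86_64-rpms   # RHEL 8: repo name is an assumption
$ sudo yum install -y ansible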

$ ansible --version
ansible 2.8.3
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/tux/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Jun 12 2019, 01:12:31) [GCC 8.2.1 20180905 (Red Hat 8.2.1-3)]

We can see that the Ansible configuration being used is the default, /etc/ansible/ansible.cfg. We will learn more about the configuration file later, but it is usually best to create our own in the current working directory. We will create a new directory for this purpose so we can group the configuration, inventory and Playbook together.

$ mkdir keys
$ cd keys
$ vim ansible.cfg #Add the following text to the new file
[defaults]
inventory = ./inventory
remote_user = root

$ ansible --version
ansible 2.8.3
config file = /home/tux/keys/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /bin/ansible
python version = 3.6.8 (default, Jun 12 2019, 01:12:31) [GCC 8.2.1 20180905 (Red Hat 8.2.1-3)]

Creating an ansible.cfg file in the current directory ensures the configuration is read from that file rather than from /etc/ansible/ansible.cfg. Running ansible --version again, we can see the path to the config file has changed. The inventory setting points to the file named inventory in the current directory, which will hold the list of hosts we manage. Creating a working directory containing the ansible.cfg, the inventory file and the Playbook is common practice and a simple way to keep everything in one place.
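For reference, Ansible looks for its configuration in a fixed order and uses the first file it finds: the ANSIBLE_CONFIG environment variable, then ./ansible.cfg in the current directory, then ~/.ansible.cfg, then /etc/ansible/ansible.cfg. As a minimal sketch, you can prove this to yourself with an environment variable override; the path /tmp/other.cfg here is just an example:

$ ANSIBLE_CONFIG=/tmp/other.cfg ansible --version | grep 'config file'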


The inventory file is often just an INI-style text file that defines the hosts we want to manage. We need to manage just two hosts for this demonstration. We will see more on Ansible inventories later in the course, but for the moment we have a very simple file with just the one line. Make sure that your file correctly identifies the hosts that you want to manage. A range can be specified as [4:5], which includes both 4 and 5; the range [4:10] would include 4, 5, 6, 7, 8, 9 and 10. We can use the *ansible* command to list the hosts using a special built-in group named all; from the output we can see the expansion of the range and that both hosts are listed correctly. The inventory file can contain resolvable host names or IP addresses.

$ cd keys
$ vim inventory #Add the following line to the new file
192.168.122.[4:5]

$ ansible all --list-hosts
hosts (2):
192.168.122.4
192.168.122.5
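To give a sense of where inventories go later, hosts can also be organised into groups, and groups can contain other groups. The sketch below is only illustrative and the host names in it are made up:

[webservers]
web[01:02].example.com

[dbservers]
192.168.122.[4:5]

[lab:children]
webservers
dbservers

$ ansible webservers --list-hosts   # list only the hosts in one group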

We want to create a Playbook to deliver SSH keys to the managed hosts. These are brand new hosts with only the root account present. We would normally want to disable SSH login for the root account, but in RHEL it is enabled by default. This means that we can manage our initial setup as root before we create additional users on the managed nodes. We could authenticate with passwords, but SSH key-based authentication is more efficient and preferred. Rather than generate the key on the control node and copy the ID to each system with ssh-copy-id, we will use Ansible to distribute the keys for authentication.
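For comparison, the manual approach we are replacing would look something like this, repeated once per managed host:

$ ssh-copy-id root@192.168.122.4
$ ssh-copy-id root@192.168.122.5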

NOTE: Once the initial setup is complete and we have additional users on the remote nodes, we will secure the Ansible system and managed nodes by using unprivileged accounts and escalating privileges with sudo.
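To preview what that later configuration might look like, here is a minimal ansible.cfg sketch using privilege escalation; the remote user name ansible is an assumption for illustration only:

[defaults]
inventory = ./inventory
remote_user = ansible

[privilege_escalation]
become = true
become_method = sudo
become_user = root
become_ask_pass = false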

When generating the key pair for tux, my standard user, we keep it simple and do not specify a passphrase. This is simply for ease of demonstration; in reality, of course, we would set a secure passphrase.

$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/tux/.ssh/id_rsa):
Created directory '/home/tux/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/tux/.ssh/id_rsa.
Your public key has been saved in /home/tux/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:e8FCXybXttf/E8lnQryzqzyXhkVQBggClur6PuEHIaI tux@repo-rhel8
The key's randomart image is:
+---[RSA 2048]----+
|   (randomart)   |
+----[SHA256]-----+
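If you want to script this step, the same key can be generated non-interactively; the sketch below assumes an RSA key with no passphrase, which again is only acceptable for a demonstration:

$ ssh-keygen -t rsa -b 2048 -N '' -f $HOME/.ssh/id_rsa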

Now that we have the public key, we can distribute it using a Playbook. Before creating the Playbook, which is a YAML file, we can configure the *vim* text editor to make formatting YAML easier by appending the following line to $HOME/.vimrc:

$ vim $HOME/.vimrc
autocmd FileType yaml setlocal ai sw=2 ts=2 et

This sets the vim editor to enable auto-indenting, sets the tab-stop and shift-width to 2 spaces, and saves tabs as spaces in the file. Whitespace and indenting are important in YAML files and this helps ensure we get the indenting correct.

The final task is to create the Playbook. This will contain a single play with a single task. Using the module "authorized_key" we can add the specified public key to the user we target on the remote hosts, allowing key-based authentication from the local tux user to the root account on the managed nodes. Once we have created the Playbook, we can check that the syntax is correct, which is always good to do.

$ cd $HOME/keys
$ vim keys.yml
---
- name: Add Keys to hosts
  hosts: all
  tasks:
    - name: Install Key
      authorized_key:
        user: root
        state: present
        key: "{{ lookup('file', '/home/tux/.ssh/id_rsa.pub') }}"

$ ansible-playbook --syntax-check keys.yml
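If the syntax is correct, the command typically just echoes the playbook name back (for example, playbook: keys.yml) and exits with a zero status; if not, it reports the line and column of the problem. A small sketch of checking the exit status from the shell:

$ ansible-playbook --syntax-check keys.yml && echo 'Syntax OK'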

The YAML file starts with three dashes: ---. The one and only play has the name Add Keys to hosts; it targets all hosts in the inventory file and then lists its tasks. In this case a single task retrieves the specified key from the file and adds it to the root user's $HOME/.ssh/authorized_keys file. We execute the Playbook with the ansible-playbook command, specifying the option -k so that we are prompted for the SSH password, as we do not have keys set up yet.
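One aside: prompting for SSH passwords with -k relies on the sshpass helper being present on the control node, so if the password prompt fails you may need to install it first (on CentOS it comes from EPEL):

$ sudo yum install -y sshpass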

$ ansible-playbook -k keys.yml
SSH password:

PLAY [Add Keys to hosts] **********************************************************************************************

TASK [Gathering Facts] ************************************************************************************************
ok: [192.168.122.5]
ok: [192.168.122.4]

TASK [Install Key] ****************************************************************************************************
changed: [192.168.122.5]
changed: [192.168.122.4]

PLAY RECAP ************************************************************************************************************
192.168.122.4 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
192.168.122.5 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

Looking at the output we can see the task changed the state on both hosts. This is good: the Playbook has worked. We can confirm this by running the Playbook again, this time without the option -k, so we will be using the keys we distributed. Running the Playbook again will also demonstrate the idempotent behaviour of Ansible. The task can be run many times but will only act if the desired state is not met.

$ ansible-playbook keys.yml
PLAY [Add Keys to hosts] **********************************************************************************************

TASK [Gathering Facts] ************************************************************************************************
ok: [192.168.122.5]
ok: [192.168.122.4]

TASK [Install Key] ****************************************************************************************************
ok: [192.168.122.4]
ok: [192.168.122.5]

PLAY RECAP ************************************************************************************************************
192.168.122.4 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
192.168.122.5 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

This time we do not have the SSH password prompt, so we are authenticating using the keys. As the key is already in place, nothing needs to change on the hosts, and we see that both managed nodes report ok rather than changed.
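As a final check, a quick ad-hoc command can confirm that key-based access now works from the control node; the ping module simply verifies that Ansible can log in and run Python on each managed host:

$ ansible all -m ping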