
Provisioning Vagrant with Ansible

December 22, 2020 (updated September 12, 2022)

Provisioning Virtual Machines using Vagrant and Ansible is a lot easier than you think, especially if you already have some Ansible experience. Within your own environment you may well be used to using Vagrant from HashiCorp to deploy Virtual Machines. Vagrant is free to install and use on Linux, Windows and macOS. The great thing with Vagrant is that you can further configure your Virtual Machine using provisioners, and one of those provisioners is Ansible. The Ansible controller can be installed on Linux and macOS, so if your Vagrant host runs one of those operating systems, you are ready both to deploy your Virtual Machine with Vagrant and to provision it with your Ansible Playbooks. Using Ansible 2.10.x installed on macOS, I will demonstrate how you can use Vagrant and Ansible together.

Provisioning Virtual Machines Using Vagrant Alone

We start by looking at provisioning Vagrant Virtual Machines, but with just Vagrant to begin with. In simple terms, Vagrant is a Virtual Machine manager and requires a virtualization engine. Typically this would be VirtualBox, as it is free on all platforms. We will use VirtualBox and Vagrant on macOS.
– Download and install VirtualBox from https://www.virtualbox.org/wiki/Downloads
– Download and install Vagrant from https://www.vagrantup.com/downloads.html
Having downloaded and installed both products, we can ignore the normal usage of VirtualBox; remember that Vagrant is your Virtual Machine manager, so although we need VirtualBox installed we do not need to open or use its interface. Create a project directory for your Virtual Machine project; this can also be our Ansible project directory.
Listing 01: Create directory for Vagrant project

$ mkdir -p $HOME/vagrant/ubuntu
$ cd $HOME/vagrant/ubuntu

The control file for Vagrant is called the Vagrantfile, and this defines the Virtual Machines. The Virtual Machines are cloned from boxes, which are downloaded from the Vagrant repository located at https://app.vagrantup.com/boxes/search. A box only needs to be downloaded once, and future VMs can be cloned from the downloaded copy. First we look at a simple Vagrantfile defining one Virtual Machine before moving on to the Ansible provisioning element.
Listing 02: A Simple Vagrantfile Defining a Single VM

$ cd $HOME/vagrant/ubuntu
$ vim Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"
  config.vm.hostname = "bionic"
  config.vm.provider "virtualbox" do |vb|
    vb.memory = "2048"
  end
end

We use two code blocks in this example. The outer block defines the configuration and uses a variable called config to configure elements:
– config.vm.box defines the Virtual Machine image (box) that we want to download
– config.vm.hostname defines the hostname of the new VM

The inner block defines settings specific to VirtualBox; as an example we increase the virtual RAM assigned to the VM. The variable used in this block we have set to vb.
To start the Virtual Machine we use vagrant up, executed from the directory $HOME/vagrant/ubuntu. We can connect using the command vagrant ssh from the same directory:
Listing 03: Starting the VM and connecting

$ cd $HOME/vagrant/ubuntu
$ vagrant up
...
$ vagrant ssh
vagrant@bionic:~$ free -m
              total        used        free      shared  buff/cache   available
Mem:           1992          82        1656           0         254        1767
Swap:             0           0           0
vagrant@bionic:~$ exit
logout
Connection to 127.0.0.1 closed.
$

The default networking uses the VirtualBox NAT network, so we connect to localhost on port 2222 and are redirected to SSH on the running VM. This is transparent to us when making the connection via the vagrant ssh command. We also connect as the user vagrant using key-based authentication; again, this is transparent to us from the command line. To shut the VM down we can use vagrant halt. This keeps the Virtual Machine, and it can be started again at any time using vagrant up. There is no need to clone the box again if the VM already exists, so any configuration changes are maintained, as you would expect with normal VM usage. To delete the Virtual Machine we use vagrant destroy; with the -f option we will not be prompted to confirm. We will now destroy the VM before looking at Ansible integration.
Listing 04: Deleting the VM

$ vagrant destroy
default: Are you sure you want to destroy the 'default' VM? [y/N] y
==> default: Forcing shutdown of VM...
==> default: Destroying VM and associated drives...

Ansible Integration with Vagrant to Provision Virtual Machines

We can now move on to the full picture: provisioning Vagrant Virtual Machines with Ansible on Linux or macOS. The automation of your Vagrant Virtual Machine provisioning could not get much better than by using Ansible. You will need the Ansible controller installed on your VirtualBox host machine, meaning that you are limited to Linux or macOS. If you can meet that requirement, you will benefit from using Ansible to address some of the issues presented by Vagrant's defaults:
- The default network connectivity is via the NAT network. This means that other Virtual Machines, and even the host system, do not have direct access to the Virtual Machine; access is via port forwarding from the host system. If you have many Virtual Machines and many services this can become quite messy
- Password-based authentication is disabled by default; this makes the system more secure but may not be what you want for your test lab systems
- The vagrant user account is the user you should use to connect to each VM; again, a specific named account may be more useful. I always use the tux user account in my labs

Network Connectivity

We can resolve network connectivity easily and without the need for Ansible; this is just a Vagrantfile setting, so we configure it first before moving on to Ansible. We will assume that the 192.168.50.0/24 network is unused on your system. The setting creates a new host-only network in VirtualBox, connecting the VM to both the NAT network and the host-only network. All VMs defined with an IP on the same network will be able to communicate with each other as well as with the host. In the following demonstration we edit the existing Vagrantfile, adding in the new network and address. We start the Virtual Machine before connecting and listing the IP addresses. We see the address 192.168.50.100, and we are able to ping the VirtualBox host on the IP 192.168.50.1. We end by deleting the VM.
Listing 05: Adding a host-only network to the Vagrantfile

$ cd $HOME/vagrant/ubuntu
$ vim Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"
  config.vm.network "private_network", ip: "192.168.50.100"
  config.vm.hostname = "bionic"
  config.vm.provider "virtualbox" do |vb|
    vb.memory = "2048"
  end
end
$ vagrant up
$ vagrant ssh
vagrant@bionic:~$ ip -4 addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
       valid_lft 86368sec preferred_lft 86368sec
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    inet 192.168.50.100/24 brd 192.168.50.255 scope global enp0s8
       valid_lft forever preferred_lft forever
vagrant@bionic:~$ ping -c 1 192.168.50.1
PING 192.168.50.1 (192.168.50.1) 56(84) bytes of data.
64 bytes from 192.168.50.1: icmp_seq=1 ttl=64 time=0.194 ms

--- 192.168.50.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms
vagrant@bionic:~$ exit
logout
Connection to 127.0.0.1 closed.
$ vagrant destroy -f
==> default: Forcing shutdown of VM...
==> default: Destroying VM and associated drives...

Adding Your User Account

We know that Ansible can deploy user accounts, and of course this applies when provisioning Vagrant with Ansible. We will now start working with an Ansible Playbook, a YAML file, so that we can define our own specified user account. We will set a password, but password authentication is not enabled for SSH just yet. Within the Vagrant project directory we create the Playbook, which we name simply ubuntu.yml. We add the user account now, but we will use the same Playbook to define other entries as we work through the demonstration.
Listing 06: Adding the personalized user account in an Ansible Playbook

$ cd $HOME/vagrant/ubuntu
$ vim ubuntu.yml
---
- name: test
  hosts: all
  gather_facts: true
  become: true
  tasks:
    - name: Create tux user
      user:
        name: tux
        state: present
        password: "{{ 'Password1' | password_hash('sha512') }}"
        update_password: on_create
        shell: /bin/bash
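The password_hash('sha512') filter in the task above produces a standard SHA-512 crypt string, the same $6$ format stored in /etc/shadow. As a quick sketch (assuming OpenSSL 1.1.1 or later for the -6 option), you can generate and inspect such a hash on the controller:

```shell
# Generate a SHA-512 crypt hash of "Password1" with a random salt;
# this is the same format the password_hash('sha512') filter emits
hash=$(openssl passwd -6 'Password1')

# The leading crypt identifier marks the algorithm: $6$ means SHA-512
printf '%s\n' "$hash" | cut -c1-3   # prints: $6$
```

Because a new random salt is used each run, the hash differs every time; the update_password: on_create setting in the task keeps the play idempotent, as the password is only set when the account is first created.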

Allowing SSH Password Authentication

We have set a password for our user, but in Vagrant boxes password authentication is disabled for SSH. The default vagrant account uses key-based authentication, which is more secure than using passwords. We, though, may prefer passwords for our own lab systems, in which case we must edit the SSHD configuration. The configuration change requires a restart of the SSHD service, which we implement with a handler in Ansible. A handler is only executed when it is notified by a task; editing the SSH configuration file can then force the restart of the service.
Listing 07: Allowing Password-Based Authentication

$ cd $HOME/vagrant/ubuntu
$ vim ubuntu.yml
---
- name: test
  hosts: all
  gather_facts: true
  become: true
  handlers:
    - name: restart_sshd
      service:
        name: sshd
        state: restarted
  tasks:
    - name: Create tux user
      user:
        name: tux
        state: present
        password: "{{ 'Password1' | password_hash('sha512') }}"
        update_password: on_create
        shell: /bin/bash
    - name: Edit SSHD Config
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^PasswordAuthentication '
        insertafter: '#PasswordAuthentication'
        line: 'PasswordAuthentication yes'
      notify: restart_sshd
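The lineinfile task works like this: if a line matching regexp (a live PasswordAuthentication directive) already exists it is replaced; otherwise line is inserted after the first line matching insertafter (the commented default). As an illustration only, the same logic can be approximated with grep and sed on a scratch file (GNU sed assumed for -i; the real task edits /etc/ssh/sshd_config on the VM):

```shell
# Scratch file standing in for /etc/ssh/sshd_config
printf '#PasswordAuthentication yes\nChallengeResponseAuthentication no\n' > /tmp/sshd_demo

# Replace a live directive if present, otherwise insert the
# new line after the commented default
if grep -q '^PasswordAuthentication ' /tmp/sshd_demo; then
  sed -i 's/^PasswordAuthentication .*/PasswordAuthentication yes/' /tmp/sshd_demo
else
  sed -i '/#PasswordAuthentication/a PasswordAuthentication yes' /tmp/sshd_demo
fi

grep '^PasswordAuthentication' /tmp/sshd_demo   # prints: PasswordAuthentication yes
```

Running the block a second time takes the first branch instead and leaves the file unchanged, which is exactly the idempotent behaviour lineinfile gives you for free.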

Sudo Rights For The New User

The new user we have created may also need to be the account that we use day to day, and as such we will want full admin rights via sudo. We can manage this using Ansible and the copy module, adding a file to the /etc/sudoers.d/ directory. Make sure that you correctly reference your user account; we are using tux. While we are editing the Playbook we also install a few useful packages with the package module.
Listing 08: Assigning sudo rights to the new user

$ cd $HOME/vagrant/ubuntu
$ vim ubuntu.yml
---
- name: test
  hosts: all
  gather_facts: true
  become: true
  handlers:
    - name: restart_sshd
      service:
        name: sshd
        state: restarted
  tasks:
    - name: install
      package:
        state: latest
        name:
          - bash-completion
          - vim
          - nano
    - name: Create tux user
      user:
        name: tux
        state: present
        password: "{{ 'Password1' | password_hash('sha512') }}"
        update_password: on_create
        shell: /bin/bash
    - name: Edit SSHD Config
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^PasswordAuthentication '
        insertafter: '#PasswordAuthentication'
        line: 'PasswordAuthentication yes'
      notify: restart_sshd
    - name: Add sudo rights for tux
      copy:
        dest: /etc/sudoers.d/tux
        content: "tux ALL=(root) NOPASSWD: ALL\n"
        backup: true
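A malformed file in /etc/sudoers.d/ can break sudo entirely, so it is worth validating what the copy task deploys. As a sketch (the scratch path /tmp/sudoers_tux is just for illustration, and visudo must be installed), visudo's check mode verifies the syntax:

```shell
# Write the same content the copy task deploys, then syntax-check it
printf 'tux ALL=(root) NOPASSWD: ALL\n' > /tmp/sudoers_tux

# visudo -c checks syntax; -f points it at a specific file.
# It exits 0 and reports the file parsed OK when it is valid.
visudo -cf /tmp/sudoers_tux
```

The copy module can run this check automatically before installing the file via its validate option, for example validate: /usr/sbin/visudo -cf %s.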

Provisioning with Ansible Playbooks

We can now bring all of this together. Within the Vagrantfile we need to reference the Playbook that we have created; we can then bring the Virtual Machine up and we will have the desired configuration. We will be able to log in to the host-only IP address of the VM using the tux account and the password we set, and this user has full sudo rights.
Listing 09: Referencing the Playbook from the Vagrantfile

$ cd $HOME/vagrant/ubuntu
$ vim Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"
  config.vm.network "private_network", ip: "192.168.50.100"
  config.vm.provision "ansible", playbook: "ubuntu.yml"
  config.vm.hostname = "bionic"
  config.vm.provider "virtualbox" do |vb|
    vb.memory = "2048"
  end
end
$ ansible-playbook ubuntu.yml --syntax-check
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'

playbook: ubuntu.yml
$ vagrant up
$ ssh tux@192.168.50.100
tux@bionic:~$ sudo -l
Matching Defaults entries for tux on bionic:
env_reset, mail_badpass, secure_path=/usr/local/sbin\:/usr/local/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin\:/snap/bin

User tux may run the following commands on bionic:
(root) NOPASSWD: ALL
tux@bionic:~$ exit
logout
Connection to 192.168.50.100 closed.
$ vagrant destroy -f

We can now see the power of provisioning Vagrant Virtual Machines with Ansible on Linux or macOS. Ansible is very powerful and can deliver fully configured systems in minutes. Do take note, though: when deleting the VM and recreating it later, you will have an old copy of the server's SSH host keys. We can delete them from the host system's known_hosts file:
Listing 10: Cleaning Old Known Hosts

$ ssh-keygen -R 192.168.50.100
# Host 192.168.50.100 found: line 137
/Users/andrew/.ssh/known_hosts updated.
Original contents retained as /Users/andrew/.ssh/known_hosts.old