Ansible inventory management for AWS EC2 on a small scale


At TRI, we do everything the hard way… on a small scale. Many online services’ “best practices” and offerings are fine when you have a large operating budget and staff, which leaves “smaller” shops with some notable problems to solve.

Until recently, we ran a bunch of AWS EC2 instances using Ansible, but Ansible Tower and various “dynamic inventory” schemes did not meet our relatively small-scale needs. We don’t have hundreds or thousands of anything. What we wanted was a simple way to use a human-friendly naming convention for our hosts (ex: encoding-inspector-server-20200629-2) while still avoiding having to manually update the rapidly-changing server IP addresses required for Ansible management.

Overview

We solved the problem like this:

  1. Create EC2 instances based on our local Ansible inventory file
  2. Query AWS for EC2 IP addresses and create an alternate SSH configuration file, mapping our local hostnames to public IP addresses
  3. Finally, configure Ansible to use the alternate SSH configuration file

Now we can use our inventory hostnames without having to update the IP addresses manually.

Implementation
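
For reference, the local inventory we maintain by hand might look something like this hypothetical hosts.yml excerpt (made-up group and host names); the aws_ec2_* parameters used in the playbooks below live in group_vars.

# hosts.yml (hypothetical excerpt) – hostnames are our own inventory names, not DNS names
all:
  children:
    encoding_inspectors:
      hosts:
        encoding-inspector-server-20200629-1:
        encoding-inspector-server-20200629-2: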

We create new EC2 instances like this, using parameters from group_vars, hosts.yml, etc. This step ensures we’ve got the right number of instances running and that each is named with our internal inventory name.

---

- name: "Create ec2 instance(s) for {{ target_hosts | default('all') }}"
  hosts: "{{ target_hosts | default('all') }}"
  gather_facts: false
  tasks:

    - name: "Ensure ec2 instance for {{ inventory_hostname }}"
      ec2:
        key_name: "{{ aws_ec2_key_name }}"
        region: "{{ aws_ec2_region }}"
        image: "{{ aws_ec2_image }}"
        wait: "{{ aws_ec2_wait }}"
        vpc_subnet_id: "{{ aws_ec2_vpc_subnet_id }}"
        group: "{{ aws_ec2_group }}"
        instance_type: "{{ aws_ec2_instance_type }}"
        exact_count: 1
        count_tag:
          Name: "{{ inventory_hostname }}"
        instance_tags:
          Name: "{{ inventory_hostname }}"
        assign_public_ip: yes
        termination_protection: "{{ aws_ec2_termination_protection }}"
      delegate_to: 127.0.0.1
      register: ec2_results

This playbook ensures one AWS EC2 instance for each inventory hostname in our local inventory file. Key elements are:

  • instance_tags: / Name: "{{ inventory_hostname }}" – Setting the Name tag to the local inventory hostname is the key to this solution.
  • exact_count: 1 – indicates that only one instance matching the “count_tag” is desired.
  • count_tag: / Name: "{{ inventory_hostname }}" – indicates that the thing to count is the value of the Name tag (which is, again, our local inventory hostname).
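
With target_hosts exposed as a variable, a run against just one group of the inventory might look like this (hypothetical playbook filename and group name):

ansible-playbook create_ec2_instances.yml -e target_hosts=encoding_inspectors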

After the EC2 instances are created (or terminated, for that matter), we run the update_ec2_ssh_config.yml playbook to update the ~/.ssh/ec2_config file.

---

- name: Create ec2_config based on live ec2 instances
  hosts: all
  gather_facts: False

  tasks:

    - name: "Gather ec2 instance info for each region"
      ec2_instance_info:
        aws_access_key: "{{ aws_access_key_id }}"
        aws_secret_key: "{{ aws_secret_access_key }}"
        region: "{{ item }}"
      register: ec2_instance_info
      delegate_to: localhost
      run_once: True
      with_items: "{{ aws_regions }}"

    - name: "Write ansible_ssh_config"
      template:
        src: templates/ec2_ssh_config.j2
        dest: "~/.ssh//ec2_config"
      vars:
        instances_by_region: "{{ ec2_instance_info.results }}"
        user: "{{ aws_ec2_user }}"
        identity_file: "{{ secure_keys_dir }}/{{ aws_ec2_key_name }}"
      delegate_to: localhost
      run_once: True
      changed_when: False # otherwise this task would report every instance as changed

Things to note:

  • The ec2_instance_info module can only return info for one region at a time, so it has to be run once for each region where you have instances.
  • Vars for the template include the EC2 instance info, a user, and an identity file. All our instances use the same Ansible user (“ubuntu”) and the same SSH identity file to connect, so we do not provide separate values for each EC2 instance. You can see how this rolls out in the SSH config file template.
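
For clarity, the registered ec2_instance_info.results data the template consumes has roughly this shape (the field names are the ones the template uses; the values are made up):

# one entry per region (the region itself comes back as "item" from with_items)
- item: us-west-2
  instances:
    - tags:
        Name: encoding-inspector-server-20200629-2
      public_ip_address: 203.0.113.10
    - tags:
        Name: encoding-inspector-server-20200629-1
      # no public_ip_address key when an instance has no public IP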

Here’s the template.

  • Host entries are arranged by region to match the EC2 info data we’re passing.
  • The “Host *” section applies the common user and identity_file values to all hosts where they have not been set explicitly (i.e., all of them).
  • Just for convenience, a comment is added for any inventory hostname with no running EC2 instance.
  • The CheckHostIP no setting stops SSH from checking the known_hosts file against the key presented by the EC2 instance and screaming bloody murder every time we replace a machine (which comes up with a new key).

# {{ ansible_managed }}
# alternate ssh config file for ec2 instances

{% for region in instances_by_region %}
# Region {{ region.item }}

{% for instance in region.instances %}
{% if instance.tags.Name is defined %}
{% if instance.public_ip_address is defined %}
Host {{ instance.tags.Name }}
  Hostname  {{ instance.public_ip_address }}
{% else %}
# Host {{ instance.tags.Name }}
#  Hostname  (no public ip address)
{% endif %}
{% endif %}

{% endfor %}
{% endfor %}

Host *
  User {{ user }}
  IdentityFile {{ identity_file }}
  CheckHostIP no

# end
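
For illustration, the rendered ~/.ssh/ec2_config ends up looking something like this (made-up hostnames, address, and key path):

# Ansible managed
# alternate ssh config file for ec2 instances

# Region us-west-2

Host encoding-inspector-server-20200629-2
  Hostname  203.0.113.10

# Host encoding-inspector-server-20200629-1
#  Hostname  (no public ip address)

Host *
  User ubuntu
  IdentityFile /secure/keys/our-ec2-key
  CheckHostIP no

# end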

With the new ec2_config file in place, all that’s left is to tell Ansible to use this config file instead of the default one for SSH, via ssh_args in the [ssh_connection] section of ansible.cfg:

...
#ssh_args = -C -o ControlMaster=auto  -o ControlPersist=300s
ssh_args = -F /Users/tomwilson/.ssh/ec2_config -o ControlMaster=auto -o ControlPersist=60s
...

The only option added is the “-F” option specifying the new SSH options file.

That’s it.

This solution lets us automatically create and manage AWS EC2 instances from a local inventory file, without ever hand-editing the constantly changing IP addresses.

Bonus round

Now that there’s a new SSH options file, it can also be used to connect to the EC2 instances from the command line, like this:

ssh my_inventory_hostname -F ~/.ssh/ec2_config

And for convenience I’ve added this executable script, sshec2 (note: this is on my Mac… it’s certainly not how you would do it on Windows):

#!/bin/bash
# Wrap ssh with the alternate EC2 config file; pass any extra arguments through.
ssh "$1" -F ~/.ssh/ec2_config "${@:2}"
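
Assuming the script is saved as sshec2 somewhere on the PATH (the name and location are up to you), it just needs to be marked executable:

chmod +x ~/bin/sshec2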

And now all I have to remember (and type) is this:

sshec2 my_inventory_hostname
