Configuring your infrastructure with our AWS Ansible Collection is simple. Join us in pressing Enter to create your own!
Installation
To start, make sure you have Ansible 2.9 or later installed on your system. Then, if you’re on RHEL or CentOS, the easiest option is installing via yum or dnf. We also provide an ansible-galaxy package if that’s what you prefer. Download instructions for the collection are available here.
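If you choose the Galaxy route, installation is a one-liner; the collection name steampunk.aws matches the module names used throughout this post. The yum/dnf package name below is a placeholder of ours, so consult the download instructions for the real one.

ansible-galaxy collection install steampunk.aws

# On RHEL or CentOS; <package-name> is a placeholder, see the download instructions:
sudo dnf install <package-name>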
To make sure everything was installed properly, run ansible-doc steampunk.aws.ec2_instance, which prints out the documentation for our module that controls EC2 instances.
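On Ansible 2.10 or newer (the subcommand is not available in 2.9), you can also list the installed collection to confirm its version:

ansible-galaxy collection list steampunk.aws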
Playbook
Make sure you have version 0.8.2 or above of the Steampunk AWS Ansible Collection installed.
Let’s release some steam right away with this simple playbook to show you just how intuitive it is to begin configuring your AWS resources.
- hosts: localhost
  tasks:
    - name: Configure the steamy VPC
      steampunk.aws.ec2_vpc:
        name: steamy-vpc
        cidr: 10.0.0.0/16
      register: vpc

    - name: Configure the steamy subnet
      steampunk.aws.ec2_subnet:
        name: steamy-subnet
        vpc: "{{ vpc.object.id }}"
        cidr: 10.0.0.0/24
      register: subnet

    - name: Configure the steam keypair
      steampunk.aws.ec2_key_pair:
        name: steam-pair
      register: key_pair

    - name: Store the keypair if it was generated
      copy:
        dest: /tmp/steam-pair.key
        content: "{{ key_pair.object.key_material }}"
      when: key_pair is changed

    - name: Configure the steam engine
      steampunk.aws.ec2_instance:
        name: steam-engine
        type: t3.micro
        ami: ami-0b7937aeb16a7eb94 # Ubuntu 18.04
        key_pair: "{{ key_pair.object.name }}"
        subnet: "{{ subnet.object.id }}"
You may have noticed we explicitly specify which collection’s modules we’re using in the playbook - steampunk.aws. These names are called FQCNs: fully qualified collection names. With Ansible’s migration to collections, we always need to specify where our modules are coming from if we’re using modules that are not part of the default built-in collection.
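If spelling out FQCNs on every task feels verbose, Ansible also offers the collections play keyword, which lets short module names resolve against the listed collections. A minimal sketch:

- hosts: localhost
  collections:
    - steampunk.aws
  tasks:
    - name: Short name resolves to steampunk.aws.ec2_vpc
      ec2_vpc:
        name: steamy-vpc
        cidr: 10.0.0.0/16

We still stick to FQCNs in this post, since they make it obvious at a glance where every module comes from.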
Now let’s take a look at what the above playbook does and how it’s constructed. Playbooks constructed using our content are meant to serve as readable documentation, so there is little need to duplicate the configuration in a separate documentation store; the playbook itself remains the single source of truth.
The first lines specify where and how the tasks will run - on the local machine. After that, we jump straight into configuring our new cloud infrastructure. We want this experiment isolated from everything else, so we create a new VPC to host our resources.
Next, we create a subnet inside the VPC we have just made and register the result, so we can later launch an instance in it.
As with all cloud services, we need to generate a key pair to access the machine. We also save the key to disk, as it is not stored by AWS and is only accessible when first generated.
Then we finally launch an instance with a minimal set of parameters.
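Because each task registers its result, later tasks can reference the resources created before them - that is exactly what the vpc.object.id and subnet.object.id lookups in the playbook do. If you want to peek at what a module returned, you could append a quick debug task (the key names below simply follow what the playbook already references):

- name: Inspect the registered VPC result
  debug:
    var: vpc.object.id

Once the instance is up, connecting with the saved key would look roughly like ssh -i /tmp/steam-pair.key ubuntu@<instance-public-ip> (ubuntu is the default user on Ubuntu AMIs), after a chmod 600 on the key file, since SSH refuses keys that are readable by others.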
Provisioning the steam stack
Run the above playbook using ansible-playbook playbook.yml.
You also need to specify your AWS credentials securely through, for example, environment variables:
export AWS_SECRET_KEY=fill-me-in
export AWS_ACCESS_KEY=fill-me-in
export AWS_REGION=eu-north-1
ansible-playbook playbook.yml
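If you would rather keep the secret out of your shell history, one small sketch (bash-specific) is to let the shell prompt for it instead:

read -rs AWS_SECRET_KEY && export AWS_SECRET_KEY
export AWS_ACCESS_KEY=fill-me-in
export AWS_REGION=eu-north-1
ansible-playbook playbook.yml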
If your organisation’s security scheme has a jump host that injects credentials, you can even run the playbook on that server. Credential injection is frequently done through environment variables, but there are more possibilities, depending on your organisation’s requirements. More on that in a separate blog post.
When the execution finishes, this is what the output should look like:
PLAY [localhost] ********************************************************************
TASK [Gathering Facts] **************************************************************
ok: [localhost]
TASK [Configure the steamy VPC] *****************************************************
changed: [localhost]
TASK [Configure the steamy subnet] **************************************************
changed: [localhost]
TASK [Configure the steam keypair] **************************************************
changed: [localhost]
TASK [Store the keypair if it was generated] ****************************************
changed: [localhost]
TASK [Configure the steam engine] ***************************************************
changed: [localhost]
PLAY RECAP **************************************************************************
localhost : ok=6 changed=5 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
For each of the above tasks, we get one item in the output, telling us that a resource has been changed.
If we ran this again, our modules would not create another set of resources - they are smart enough to identify the existing ones and report that nothing has changed!
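Concretely, a second run of the very same playbook should end with a recap along these lines (illustrative; the key-saving task is now skipped, because its when: key_pair is changed condition no longer holds):

PLAY RECAP **************************************************************************
localhost : ok=5 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0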
If you’re bothered by the fact that Ansible says that the changes happened on localhost, fret not! This just means that your local machine was the one executing the AWS API calls - you can use your jump host if you like.
Let’s now see what would happen if we made a typo in the playbook. For example, change the instance type from t3.micro to t0.micro.
Our modules do extensive checks before modifying anything to make sure they don’t leave you hanging in some kind of
undefined state.
You can be sure that you either have exactly what you wanted, or a note of what went wrong.
TASK [Configure the steam engine] ***************************************************
fatal: [localhost]: FAILED! =>
{"changed": false, "msg": "Instance type t0.micro does not exist in your region"}
As you can see, a clear error message tells you exactly what to recheck before you next run the playbook.
Run this playbook as many times as you like, and even play around with updating parameters across runs.
For advanced users, see how the --diff and --check command line switches can help ensure consistency across your organisation.
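For example, after editing a parameter in the playbook, a dry run previews the effect without touching your AWS account. Both switches are standard ansible-playbook options; whether a particular change can be applied in place is up to the individual module:

ansible-playbook playbook.yml --check --diff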
Stay tuned for more
In future posts, we’ll explore how idempotence, diff and check modes can solve your config drift woes and the numerous ways of authenticating with AWS.
Try out the Steampunk AWS Ansible Collection today by contacting us here!
You can also follow us on Twitter, LinkedIn, and Reddit. Until next time!