This post was originally published on the
XLAB Steampunk blog.
At the beginning of July 2020, we started using the NGINX Unit application server in one of our projects. Because we are not barbarians, we wanted to use Ansible to automate the deployment. And this is where the “fun” started.
In this post, we will give a short backstory about our NGINX Unit adoption, continue with the reasons behind our decision to create a complete Ansible Collection instead of a simple Ansible role, and conclude with a short development process overview.
In June of 2020, we started migrating our project from CentOS 7 to CentOS 8. All went well until we realized that our preferred backend setup (NGINX + uWSGI + Django/Flask) was missing a critical piece on CentOS 8: uWSGI is no longer part of the Extra Packages for Enterprise Linux (EPEL) repository.
In this situation, we had three choices:
- We could find a 3rd-party uWSGI package for CentOS 8.
- We could install uWSGI manually using `pip` and write the systemd service files ourselves.
- We could replace uWSGI with another application server that has packages for CentOS 8.
After quite a lot of searching, comparing the pros and cons, and pondering our life decisions, we decided to go with the last option. And because we were already using NGINX as our HTTP server, NGINX Unit was a natural choice for an application server.
With the application server selected, we focused our attention on updating our deployment playbooks. But before we could do that, we had to create the Ansible content we needed.
The first thing we wanted to do was, of course, install NGINX Unit and its Python language module. Because Ansible already has modules for adding YUM repositories and installing RPM packages, writing an Ansible role for NGINX Unit installation was a natural choice. But configuration was a whole different matter.
In the good old days, a simple configuration template combined with the template Ansible module was all we needed to configure an application. But we are living in different times now where cloud- and container-native applications are eating the world. Such applications usually communicate using the web Application Programming Interfaces (APIs), so it should come as no surprise that the configuration of those applications also moved from file-based to API-based processes.
And while Ansible does have a general-purpose module for interacting with web APIs, using it for anything that requires more than a single API call becomes awkward. And enforcing the desired state (something that high-quality Ansible modules should do whenever possible) is next to impossible without making a mess out of our Ansible playbooks.
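To make the awkwardness concrete, here is a sketch of driving Unit's control API with the generic `ansible.builtin.uri` module. The socket path and URL below are illustrative: every configuration change needs its own API call, and nothing checks whether the live configuration already matches the desired state.

```yaml
# Illustrative only: one uri task per API call, no change detection,
# no rollback if a later call in the sequence fails.
- name: Push listener config to Unit's control API
  ansible.builtin.uri:
    url: "http://localhost/config/listeners/*:3000"
    unix_socket: /var/run/unit/control.sock
    method: PUT
    body_format: json
    body:
      pass: applications/sample
```

Multiply this by every application, route, and listener in a deployment, and the playbook quickly turns into an imperative script rather than a declarative description of the desired state.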
So, we bit the bullet and decided to write a set of modules that would allow us to keep our Ansible playbooks readable and make our deployment process safe and robust. For starters, we opted to create three Ansible module pairs that covered the functionality we needed: setting up Python applications, routes, and listeners.
Why only three? Because delivering early and often is what our brain needs in order not to get bored and bogged down with irrelevant details ;)
We started our development process by writing down a fictional Ansible playbook for setting up the NGINX Unit. And this is what we came up with:
```yaml
---
- name: Install and run NGINX Unit
  hosts: unit_hosts
  become: true

  tasks:
    - name: Install Unit
      include_role:
        name: steampunk.unit.install

    - name: Create a directory for our application
      file:
        path: /var/www
        state: directory

    - name: Copy application
      copy:
        src: files/wsgi.py
        dest: /var/www/wsgi.py
        mode: "644"

    - name: Add application config to Unit
      steampunk.unit.python_app:
        name: sample
        module: wsgi
        path: /var/www

    - name: Expose application via port 3000
      steampunk.unit.listener:
        pattern: "*:3000"
        pass: applications/sample
```
Why did we start with the playbook? Because we wanted to make sure our user-facing interface does not suck ;)
Next, we developed the installation role. The steampunk.unit.install Ansible role is slightly different from its Ansible Galaxy siblings in the sense that it does a bit less than other comparable Ansible roles. For example, it does not alter the SELinux configuration, because hiding such a system modification deep inside an Ansible role is not something we would be comfortable doing. Instead of promoting an “it just magically works” approach, we prefer placing this kind of information into our documentation and educating users.
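As an example of the kind of system modification we leave to the user: on an SELinux-enforcing host, allowing Unit to bind a non-standard port might require a task like the following. This is a sketch using the `community.general.seport` module; the port number and type are illustrative and depend on your deployment.

```yaml
# A sketch, not part of the install role: label port 3000 so an
# SELinux-confined Unit process may bind to it.
- name: Allow Unit to listen on port 3000
  community.general.seport:
    ports: "3000"
    proto: tcp
    setype: http_port_t
    state: present
```

Keeping this out of the role means the playbook author sees, and consciously approves, every security-relevant change to the host.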
We also made sure our installation role is modular, which is something Ansible content developers often neglect. We can use the steampunk.unit.install Ansible role to add YUM repositories, install RPM packages, and start the service independently. By default, including the steampunk.unit.install Ansible role will perform all three things one after another, but we can limit the “scope” of work using the `tasks_from` parameter:

```yaml
- name: Setup NGINX Unit repositories
  include_role:
    name: steampunk.unit.install
    tasks_from: repositories.yml
```
Once we had our installation Ansible role ready, we started developing Ansible modules. We did our best to mirror the upstream’s API in our Ansible modules because that makes it easier to cross-reference the official documentation.
We wrote each module in five stages. We:
- wrote the module’s documentation,
- prepared argument specification and any extra validation logic,
- made sure parameter validation works using the unit and integration tests,
- added business logic to the module, and
- validated the business logic using integration tests.
And while this is a pretty standard development process, we did make our lives a bit easier by automating the second step. The argument specification generator tool was born because we were too lazy to duplicate the documentation section’s information. You can find more information about this tool in a dedicated blog post.
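As a rough illustration of the duplication we avoided: an Ansible module's DOCUMENTATION block already describes every option in YAML, so a generator can derive the Python argument specification from a fragment like the one below. The option names here are illustrative, not copied from the collection.

```yaml
# Illustrative DOCUMENTATION fragment; a generator can turn an options
# block like this into the module's argument_spec, so the types and
# required flags are written down exactly once.
options:
  name:
    description: Name of the application.
    type: str
    required: true
  module:
    description: WSGI module to load.
    type: str
    required: true
```

With the generator in place, the documentation becomes the single source of truth for parameter names, types, and defaults.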
The more observant of you probably noticed that our quickstart Ansible playbook does not use the steampunk.unit.route module. So why did we develop it? Because we were not just mechanically wrapping the NGINX Unit’s API, we were also adding change detection and parameter validation Ansible users rightfully expect. And to properly test the steampunk.unit.listener module using integration tests, we needed a steampunk.unit.route module.
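For completeness, a hypothetical steampunk.unit.route task might look like this. The step structure follows Unit's upstream routes API (a match condition plus an action), but treat the exact parameter names as assumptions and consult the collection's reference documentation.

```yaml
# Hypothetical usage sketch; field names mirror Unit's routes API
# (match/action) rather than being copied from the module docs.
- name: Route requests to the sample application
  steampunk.unit.route:
    name: main
    steps:
      - match:
          uri: "/static/*"
        action:
          share: /var/www/static
      - action:
          pass: applications/sample
```

A listener can then point its `pass` at `routes/main` instead of passing requests directly to an application.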
And yes, we thoroughly covered our NGINX Unit Ansible Collection with tests. We made sure sanity tests are green on Ansible versions 2.9 and 2.10. Our unit tests pass on a wide variety of Python versions, and our installation role works on supported distributions.
And we were nearly done. But we were still missing one vital part of any Ansible Collection: the documentation.
Just like we need a competent and honest marketing team to sell our product to the customers, we need high-quality documentation to “sell” our Ansible Collection to developers and administrators. And there are at least three different kinds of documentation each Ansible Collection should contain, and we made sure our NGINX Unit Ansible Collection has all of them.
The first thing we added was a short and sweet quickstart tutorial. Our Ansible playbook that guided the development process found its new home in that part of the documentation.
Next, we wrote down the general guides about the collection, explaining things like installation details, and describing the usage patterns. For example, we documented the high-level structure of module parameters that are common to all collection modules.
Reference documentation was the last thing we prepared. For our modules, we used a documentation extraction tool that rendered the built-in documentation into a set of ReStructuredText files. We still had to document roles manually, but this will hopefully change when Ansible gains support for the role argument specification. But that is a topic for another time.
As you might have noticed, tooling around Ansible Collection documentation is still pretty much in its infancy. But with a little bit of elbow grease and some custom-built tools, we can still produce a decent documentation site. And yes, we have a dedicated post about the documentation available as well. You are welcome ;)
And just like that, we made it to the end of our post. As you can see, creating an Ansible Collection is an excellent way of moving the complexity from the declarative world of Ansible playbooks and into the realm of Turing-complete programming languages. And this is something future us will greatly appreciate.
Developing a robust, well-documented Ansible Collection takes time, some rather Ansible-specific knowledge, and a great deal of ingenuity when it comes to publishing documentation. Is it worth it? Definitely!
You are welcome to try out our NGINX Unit Ansible Collection. If you are more interested in the development process of high-quality Ansible Collections, you can reach out to us or register for our free webinar on Testing Ansible Collection.