How did you get started with Ansible?

I saw a post by Michael DeHaan about a new configuration management system and thought, “here we go, just what the world needs, yet another CM.”

I was working at a banking/insurance company with locked-down application servers where no root access or agents were allowed. I had ssh and python, though…

How long have you been using it?

My first commit was July 2012, so some time around then.

What’s your favourite thing to do when you Ansible?

Mostly Kubernetes these days! But AWS works pretty well too

Making Kubernetes Easy With Ansible

Will Thames, Skedulo

October 2018

About Skedulo

  • Skedulo is the platform for intelligent mobile workforce management
  • Helps enterprises intelligently manage, schedule, dispatch, and track workers in the field
  • Typical use-cases include healthcare, property management, home improvement services

Skedulo Platform

  • AWS underlying infrastructure built using Terraform
  • Kubernetes 1.9 built using kops
  • Ansible for everything on top
  • AWX for deployments
  • 10s of applications

Live Demo

Starts now!

ansible-playbook playbooks/eks.yml -vv -e @overrides.yml -e env=test

Live demo details

  • Create a brand-new Kubernetes cluster in AWS’ Elastic Kubernetes Service
  • Create a Virtual Private Cloud
  • Create an Auto Scaling Group for worker nodes
  • Demonstrate some best practices for Kubernetes configuration management
  • From zero to ready-to-go cluster, all in Ansible

Live demo architecture

About Kubernetes

  • Kubernetes is a platform for managing, scaling and deploying applications running in containers across distributed networks
  • Kubernetes continually runs a reconciliation loop to ensure the cluster is in the desired state and corrects it if possible

Kubernetes: a common language

Kubernetes allows the specification of common characteristics of applications and services:

  • an application’s deployable and its associated configuration (Pod)
  • how many replicas should run, and where; how should updates be handled (Deployment/DaemonSet)
  • what endpoints should be exposed to the applications (Service)
  • what traffic should go to the Service (Ingress)

Kubernetes: a common language

  • ConfigMap—one or more configuration items in key-value form. Useful for setting environment variables or specifying the entire contents of one or more files for a Pod
  • Secret—similar to ConfigMap but better protected from casual view

Kubernetes resource definitions

  • Resource definitions are in YAML form

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: my-config-map
      namespace: my-namespace
    data:
      hello: world
  • Typically managed with the kubectl command line tool: kubectl apply -f resource.yml

Anti-pattern: using kubectl in playbooks

  • kubectl is awesome
  • But all the usual caveats to running commands apply
  • You have to do a template/kubectl/delete dance
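
The “template/kubectl/delete dance” might look like the following sketch (file paths and task names are illustrative):

    - name: template the resource manifest to disk
      template:
        src: resource.yml.j2
        dest: /tmp/resource.yml

    - name: apply the manifest with kubectl
      command: kubectl apply -f /tmp/resource.yml

    - name: clean up the templated manifest
      file:
        path: /tmp/resource.yml
        state: absent

None of these tasks is idempotent or check-mode aware, which is why this counts as an anti-pattern.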

Are there reasons to use kubectl?

  • kubectl does validation of resource definitions against their specification
  • kubectl can append hashes to ConfigMaps and Secrets to make them immutable
  • ad-hoc tasks:

    kubectl get configmap -n some-namespace some-config-map
    ansible -m k8s_facts -a 'namespace=some-namespace kind=ConfigMap \
      api_version=v1 name=some-config-map' localhost

Goals

  • Our goal is to use as much common code for Kubernetes management as possible
  • A single Ansible role that takes a set of resource manifests and ensures that Kubernetes meets those expectations
  • Ideally, one manifest template that works for most applications would be great, but harder
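
A sketch of what such a playbook might look like, assuming a single generic role (the role and variable names here are illustrative, not from the talk):

    - hosts: "{{ env }}-{{ app }}-runner"
      roles:
        - role: kube-resource
          vars:
            kube_manifests:
              - deployment.yml.j2
              - service.yml.j2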

Ansible’s Kubernetes strengths

  • Templating
  • Roles
  • Hierarchical inventory
  • Secrets management
  • Modules, lookup plugins, filter plugins


Templating

  • Templates are super-powerful
  • Reuse resource definition files for all environments
  • Use a common language where possible, e.g. {{ kube_resource_name }} and {{ kube_ingress_fqdn }} across all applications
  • Avoid control structures where possible; avoid

    replicas: {{ 5 if env == 'prod' else 1 }}

  in favour of

    replicas: {{ kube_deployment_replicas }}

Templating dicts

Sometimes whole sections of manifests differ between environments

    {{ kube_ingress_annotations | to_nice_yaml(indent=2) | indent(4) }}
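
For example, the annotations dict might be defined per environment in group_vars (values illustrative):

    kube_ingress_annotations:
      kubernetes.io/ingress.class: nginx
      nginx.ingress.kubernetes.io/ssl-redirect: "true"

so the same Ingress template renders the right annotations in each environment.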


Roles

  • Roles are excellent for sharing a common set of tasks across multiple playbooks
  • One role should be suitable for almost all Kubernetes operations
  • We have moved from per-application roles to one generic role for all but a couple of applications

Hierarchical inventory

  • Some properties are the same across many applications within an environment
  • Some properties are the same across all environments for an application
  • Some properties can be composed from other properties
  • Some properties may need specific overrides for certain application/environment combinations
  • All of these needs are met by inventory groups
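
A sketch of how group_vars files can meet those needs (variable names follow the talk’s conventions; values are illustrative):

    # group_vars/web.yml - the same for web across all environments
    kube_resource_name: web

    # group_vars/test.yml - the same for every application in test
    kube_deployment_replicas: 1

    # group_vars/test-web.yml - composed/overridden for this combination
    kube_ingress_fqdn: "{{ kube_resource_name }}.test.example.com"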

Avoiding hosts: localhost

  • Typically people will use hosts: localhost to talk to Kubernetes
  • This reduces the power of inventory and reuse

Using the runner pattern

  • Runner pattern uses hosts declarations like hosts: "{{ env }}-{{ app }}-runner" with e.g. -e env=test -e app=web
  • inventory hierarchies allow runners to gather their inventory from groups such as test, web and test-web
  • Set ansible_connection: local and ansible_python_interpreter: "{{ ansible_playbook_python }}" in the runner group_vars file
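
Per the last bullet, the runner group_vars file need only contain:

    # group_vars/runner.yml
    ansible_connection: local
    ansible_python_interpreter: "{{ ansible_playbook_python }}"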

Flat vs hierarchical inventory

Generating inventory

  • Group combinations explode as applications and environments increase

  • It’s easy to get this wrong with standard hosts files

  • generator inventory plugin generates such group combinations from a list of layers

Standard hosts file
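
A standard INI hosts file expressing just one environment/application combination might look like this sketch:

    [test-web-runner]
    localhost

    [test-web:children]
    test-web-runner

    [test:children]
    test-web

    [web:children]
    test-web

    [runner:children]
    test-web-runner

Repeating this for every combination by hand is where the explosion comes from.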
Generator plugin hosts file

# inventory.config file in YAML format
plugin: generator
strict: False
hosts:
    name: "{{ environment }}-{{ application }}-runner"
    parents:
      - name: "{{ environment }}-{{ application }}"
        parents:
          - name: "{{ application }}"
            vars:
              application: "{{ application }}"
          - name: "{{ environment }}"
            vars:
              environment: "{{ environment }}"
      - name: runner
layers:
    environment:
        - test
    application:
        - web
        - api


Secrets management

  • We use ansible-vault for all of our secrets
  • Kubernetes expects secrets to be base64 encoded
  • Use no_log with the k8s module when uploading secrets
  • Avoid vaulting whole variables files
  • Use ansible-vault encrypt_string to encrypt each secret inline
  • Don’t forget to use echo -n $secret | ansible-vault encrypt_string to avoid encrypting the newline!

Secrets in environment variables

  • Use a Secret resource to store secret environment variables
  • Use envFrom if you then want to include all the secrets from that resource

Secrets in environment variables

# vaulted variable, e.g. in group_vars
key1: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  ...

# mapping used to populate the Secret's data
my_secret_env:
  KEY1: "{{ key1 | b64encode }}"

A Secret manifest

apiVersion: v1
kind: Secret
metadata:
  name: my-secret-env
  namespace: my-namespace
data:
  {{ my_secret_env | to_nice_yaml(indent=2) | indent(2) }}

Using the Secret

kind: Deployment
spec:
  template:
    spec:
      containers:
        - envFrom:
            - secretRef:
                name: my-secret-env


Modules

  • k8s—main module for managing Kubernetes resources
  • k8s_facts—useful for run-time querying of resources
  • aws_eks_cluster—manages AWS EKS clusters
  • azure_rm_aks—manages Azure Kubernetes Service clusters
  • gcp_container_cluster and gcp_container_nodepool—manage GKE clusters and node pools

k8s module

  • uses the same manifest definitions as kubectl
  • can take inline resource definitions, or src from file
  • inline definitions work well with the template lookup: definition: "{{ lookup('template', 'path/to/resource.j2') | from_yaml }}"
  • invoke once with a manifest containing a list of resources, or invoke in a loop over a list of resources
  • copes with Custom Resource Definitions (2.7)
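
A typical k8s task, combining the template lookup and from_yaml as above:

    - name: ensure the resource matches the templated definition
      k8s:
        state: present
        definition: "{{ lookup('template', 'path/to/resource.j2') | from_yaml }}"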


Lookup and filter plugins

  • yaml stdout callback plugin is great for having output match input
  • k8s lookup plugin returns information about Kubernetes resources
  • from_yaml and from_yaml_all (2.7) read from templates into module data
  • b64encode encodes secrets in base64
  • k8s_config_hash and k8s_config_resource_name for immutable ConfigMaps (likely 2.8)
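
For example, the k8s lookup plugin can be used in a task like this sketch (resource names illustrative):

    - name: show the current Deployment
      debug:
        msg: "{{ lookup('k8s', kind='Deployment', namespace='my-namespace', resource_name='my-deployment') }}"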



  • We’re practising Continuous Delivery with Rolling Deployments and feature flags
  • Upgrade application
  • Enable feature flag
  • Realise feature is buggy
  • Disable feature flag

Why Immutable ConfigMaps?

  • Updating a ConfigMap used in a Deployment will not update the Pods in that Deployment
  • Rolling back to a previous configuration will not cause the Pods to pick up the ConfigMap or Secrets changes
  • kubectl rollout undo for emergency purposes will only roll back containers, not configuration

Immutable ConfigMaps

  • Name ConfigMaps based on a hash of their data
  • Reference this ConfigMap name in a Deployment
  • Changing an existing ConfigMap will change its name, triggering Pod updates
  • Rolling back a Deployment will then roll back to the old config
  • Use append_hash to generate immutable ConfigMaps
  • Use k8s_config_resource_name filter plugin to reference such ConfigMaps
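
With those features (landing around 2.8), the tasks might look like this sketch (the configmap_definition variable is illustrative):

    - name: create an immutable ConfigMap (append_hash adds a hash suffix to its name)
      k8s:
        state: present
        append_hash: yes
        definition: "{{ configmap_definition }}"

    # elsewhere, reference the hashed name in the Deployment template:
    #   configMapRef:
    #     name: "{{ configmap_definition | k8s_config_resource_name }}"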

Demo part two

Planned k8s improvements

  • append_hash will enable immutable ConfigMaps and Secrets (likely 2.8)
  • validate will return helpful warning and/or error messages if a resource manifest does not match the Kubernetes resource specification (likely 2.8)
  • wait will allow you to wait until the Kubernetes resources are actually in the desired state (hopefully 2.8)

Thanks for listening