Docker Compose has carved out an indispensable niche for local container-based development. Paired with Ansible for multi-environment orchestration, it unlocks game-changing simplicity, portability and repeatability for deploying applications.

This comprehensive guide dives deep into integrating Ansible and Docker Compose to streamline application development, testing, deployment and ongoing operations. You'll gain a holistic understanding, concrete examples and expert perspectives for unlocking the power at the intersection of infrastructure automation and containerization.

By the end, you'll be able to:

  • Deploy apps via Docker Compose multi-host orchestration with Ansible
  • Network, load balance and scale containerized services
  • Centralize and parameterize environment configurations
  • Automatically rebuild and roll out application updates
  • Visualize the system architecture across environments
  • Instrument monitoring and utilize Ansible's self-healing capabilities
  • Achieve immutable infrastructure by containerizing Ansible itself
  • Extend functionality by developing custom Ansible Collections

Let's get started!

Why Ansible and Docker Compose?

First, some quick background. Ansible is an open source automation platform for provisioning infrastructure, deploying software, and orchestrating advanced IT tasks. It uses simple declarative YAML playbooks to automate everything from network devices to Kubernetes clusters at scale.

Docker Compose is a tool that defines and runs multi-container Docker apps encoded in easy-to-read YAML files. It provides a standardized way to build container images, connect them together, and externalize environment configurations for smooth portability between environments.

So why leverage them together? Compose simplifies running containers during development. Ansible extends this simplicity to managing those containers consistently across all your environments. By combining them, you transcend local Docker convenience to achieve:

  • Multi-machine service deployment: Orchestrate Compose apps on any machine managed by Ansible

  • Single source of truth: Define app architecture in Compose then deploy exact versions with Ansible

  • Infrastructure consistency: Enforce standardized container runtimes across environments

  • Environment configuration: Parameterize app configs using Ansible variables

  • Simpler scaling: Add hosts to Ansible inventory instead of restructuring Compose files

  • Rolling updates: Schedule zero-downtime application upgrades across fleets of containers

  • Reusable build pipelines: Ansible playbooks codify immutable deploy processes between environments

According to Gartner, more than 75% of global organizations were expected to be running containerized applications in production by 2022. Paired with the increased focus on DevOps and infrastructure automation, rampant container adoption makes tools like Ansible and Docker Compose integral for engineering teams seeking productivity and portability.

[Ref 1: Gartner – https://www.gartner.com/en/newsroom/press-releases/2019-08-01-gartner-says-by-2022-more-than-75–of-global-organization]

Now that the why is clear, let's get into the details.

Comprehensive 3-Tier Web Application Example

Rather than covering concepts in isolation, we'll go through a comprehensive example deployment to cement understanding of how Ansible and Docker Compose work together in practice.

Imagine Acme Company has a customer-facing web application with three tiers:

  • Load balanced web servers
  • Centralized Redis for caching
  • Persistent MySQL database

Here's how Ansible + Docker Compose can deploy the full environment.

Application Architecture Diagram

Directory Structure

├── group_vars
│   └── all
├── inventory    
├── playbook.yml
├── requirements.yml
├── roles
│   └── docker
├── src
│   └── docker-compose.yml   

This structure allows codifying everything related to the app in one central location:

  • group_vars – Environment variables
  • inventory – Managed servers
  • playbook.yml – Orchestration playbook
  • requirements.yml – External role dependencies
  • roles/* – Ansible roles
  • src/* – Application code

Nice and tidy! Now onto functionality.

Provision Docker Engine

First step is ensuring a consistent Docker base layer across all the nodes:

roles/docker/tasks/main.yml

- name: Install Docker dependencies
  apt:
    name:
      - apt-transport-https
      - ca-certificates
      - curl
      - gnupg-agent
      - software-properties-common
    update_cache: yes
    state: latest

- name: Add Docker repo GPG key 
  apt_key:
    url: https://download.docker.com/linux/ubuntu/gpg
    state: present  

- name: Add Docker repo 
  apt_repository:
    repo: deb https://download.docker.com/linux/ubuntu focal stable

- name: Install Docker
  apt:
    name: docker-ce
    state: latest
    update_cache: true
  notify: restart docker

This role handles the full Docker installation process: adding the upstream repository and GPG key, installing packages, and restarting the Docker service via a handler when needed.

Running it ensures all container hosts have the proper Docker environments initialized before deploying applications.
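The notify: restart docker line in the install task assumes a matching handler exists in the role. A minimal sketch of what that handler file might look like (assumed, since it isn't shown above):

```yaml
# roles/docker/handlers/main.yml (assumed -- not shown in the role above)
- name: restart docker
  service:
    name: docker
    state: restarted
```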

Define Containers in Docker Compose

The multi-service app landscape is defined in docker-compose.yml.

Here are relevant snippets:

services:

  web:
    image: nginx:stable-alpine 
    depends_on:
      - cache
    networks: 
      - frontend  
    deploy: 
      replicas: 3  

  cache:
    image: redis:latest    
    networks: 
      frontend: 
        aliases:
          - redis 

  db:
    image: mysql 
    environment:
      MYSQL_ROOT_PASSWORD: dbpassword 
    networks:
      - backend
    volumes:  
      - db-data:/var/lib/mysql

networks:

  frontend:
  backend:

volumes:

  db-data: 

This encodes the structure visualized previously:

  • Load balanced Nginx web servers
  • Centralized Redis caching backend
  • Persistent MySQL database

By externalizing the architecture here, we abstract infrastructure definitions from application code for enhanced portability across environments.

Multi-Host Orchestration with Ansible

Next create an Ansible playbook to facilitate multi-host Docker Compose deployment:

- hosts: webservers

  roles:
    - docker

  tasks:
    - name: Start stack 
      community.docker.docker_compose:
        project_src: ./src
        files:
          - docker-compose.yml
        state: present

The play executes against the webservers group defined in Ansible's inventory file. This allows targeting multiple machines in a single playbook run.
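For reference, a minimal inventory file defining that group might look like this (the hostnames here are illustrative):

```ini
; inventory -- the webservers group targeted by the play
[webservers]
web1.acme.internal
web2.acme.internal
web3.acme.internal
```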

It first applies the docker role to install the engine, then utilizes the docker_compose module from the Ansible Community Docker collection to deploy the full defined stack across the designated hosts.

Now by running:

ansible-playbook playbook.yml  

You've automated the orchestration of the multi-tier app onto an entire cluster of machines with ONE command!

This immensely simplifies sharding services across infrastructure while retaining the portability benefits of Compose for local development.

Additionally, the playbook codifies the ordered steps for standing up new environments, making setup reproducible and version controlled.

Configuring Environment Variables

Recall that one core value proposition is externalizing environment-specific configurations for enhanced composability.

Ansible group variables make this simple:

group_vars/all

project_name: "acme"
redis_host: "redis.internal"
mysql_root_password: "s1mpl3"  

Access them in the playbook:

community.docker.docker_compose:
  project_name: "{{ project_name }}"
  definition:
    version: "2"
    services:
      cache:
        image: redis:latest
        environment:
          REDIS_HOST: "{{ redis_host }}"
      db:
        image: mysql
        environment:
          MYSQL_ROOT_PASSWORD: "{{ mysql_root_password }}"

Now your app container configurations can change per environment without rebuilding images.
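Inside a container, these settings arrive as plain environment variables. A minimal Python sketch of how a service might pick them up at runtime (the simulated environment dict and the LOG_LEVEL key are illustrative, not from the example above):

```python
def read_config(env, key, default):
    """Return env[key] if set, else default -- mirrors how a service
    reads settings injected by Compose/Ansible at container start."""
    return env.get(key, default)

# Simulated container environment, as Compose/Ansible would inject it:
env = {"REDIS_HOST": "redis.internal", "MYSQL_ROOT_PASSWORD": "s1mpl3"}

print(read_config(env, "REDIS_HOST", "localhost"))  # redis.internal
print(read_config(env, "LOG_LEVEL", "info"))        # falls back to: info
```

In a real service you would pass os.environ in place of the simulated dict.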

Bonus: secrets can be encrypted with Ansible Vault for added security!
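For example, a value encrypted with ansible-vault encrypt_string can sit directly alongside plain variables (the ciphertext below is a truncated placeholder, not real output):

```yaml
# group_vars/all -- mixing plain and vault-encrypted values
project_name: "acme"
mysql_root_password: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  62313365...
```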

Load Balancing Across Containers

Scaling to multiple web servers allows increased traffic throughput.

Use the Compose deploy configuration (honored by Docker Swarm's routing mesh when the stack runs in Swarm mode) to build in load balancing:

#...
services:

  web:
    deploy:
      replicas: 5
      update_config:
        parallelism: 2  
        order: start-first
#...

The replicas field spins up additional containers; under Swarm mode they are automatically registered as targets in the ingress load balancer, and the Swarm scheduler spreads them across the node pool according to available capacity.

update_config orchestrates rolling updates on redeploy: replicas are replaced two at a time, and order: start-first brings each new container up before its predecessor stops, so capacity never drops.

No extra load balancer configuration needed!

Automated Build and Deploy Pipelines

Updating an app means rebuilding images after code or configuration changes, then redeploying the updated containers.

Ansible simplifies automating the end-to-end pipeline:

- name: Build images
  community.docker.docker_compose:
    project_src: ./src  
    build: true

- name: Recreate containers
  community.docker.docker_compose:
    project_src: ./src
    state: present
    recreate: always  

Just set build: true to rebuild images from the Compose file and source code, then recreate: always to force containers to be recreated with the new images.

Bonus: Wrap in an Ansible role packageable across projects!

Now instead of 100 steps to coordinate releases, it's two tasks thanks to the power of automation and Infrastructure as Code.

Monitoring, Metrics and Auto-Healing

In terms of operational visibility, Ansible integrates tightly with metric gathering and monitoring tools for containerized infrastructure:

playbook.yml

- hosts: webservers 

  tasks:
    - import_role: 
        name: datadog
        tasks_from: docker-agent.yml

    - name: Install test containers
      docker_container:
        name: "{{ item }}"
        image: alpine
        command: "ping docker.com"
      loop: 
        - c1   
        - c2

    - name: Check container metrics
      uri: 
        url: "http://datadog:8000/api/v1/query"
        return_content: yes 
      register: metrics  

    - debug:
        msg: "{{ metrics.content | from_json | json_query('[*].series[*].last_time') }}"

This installs the Datadog agent for full-stack monitoring, spins up sample containers, queries metrics through the Datadog API, and prints the output.
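The json_query filter above applies a JMESPath expression to the response body. In plain Python the extraction looks roughly like this (the response shape here is a simplified assumption, not taken from the Datadog API):

```python
import json

# A simplified, assumed response body -- real Datadog payloads differ.
content = json.dumps([
    {"series": [{"metric": "docker.cpu.usage", "last_time": 1700000000},
                {"metric": "docker.mem.rss", "last_time": 1700000060}]},
])

# Rough equivalent of json_query('[*].series[*].last_time'):
data = json.loads(content)
last_times = [point["last_time"] for item in data for point in item["series"]]
print(last_times)  # [1700000000, 1700000060]
```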

Building on this foundation, utilize Ansible to:

  • Instrument tracing – Profile container service performance
  • Create dashboards – Graph platform KPIs for observability
  • Alert capacity thresholds – Notify teams when metrics degrade
  • Auto-remediate failures – Restart crashed containers automatically

Now you gain holistic visibility into the health and performance of your Dockerized apps!

Achieving Immutable Infrastructure

"Infrastructure as code" remains an elusive ideal without concrete steps to operationalize it.

Containerizing Ansible itself unlocks immutable infrastructure:

Dockerfile

FROM python:3.8-slim

COPY requirements.txt ./  
RUN pip install --no-cache-dir -r requirements.txt

COPY . . 

CMD [ "ansible-playbook", "-i", "inventory", "main.yml" ]  

Build

docker build -t ansible . 

Run

docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock ansible

This portable Ansible image now orchestrates the container deployments on underlying hosts.

You've shifted from manually running Ansible on workstations to self-contained Ansible instances operating reproducibly across any Docker environment.

Standardize the immutability by integrating the build into your CI pipeline for continuous delivery of infrastructure changes.
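A sketch of what that CI step might look like, using GitLab CI syntax as one example (the job name, registry URL and tagging scheme are all illustrative):

```yaml
# .gitlab-ci.yml (illustrative) -- rebuild the Ansible image on every change
build-ansible-image:
  script:
    - docker build -t registry.example.com/ops/ansible:$CI_COMMIT_SHORT_SHA .
    - docker push registry.example.com/ops/ansible:$CI_COMMIT_SHORT_SHA
```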

Now real "Infrastructure as Code" is realized through comprehensive version control and image-based execution.

Extending Ansible for Custom Container Ops

As mentioned earlier, a core benefit of Ansible is simplified automation of advanced operational tasks from load balancing to zero-downtime deployments.

But what if Ansible doesn't have a module tailored to your needs?

Thankfully you can develop custom modules and plugins to fit any use case!

For example, building specialized Docker modules:

from ansible.module_utils.basic import AnsibleModule
import docker  # third-party Docker SDK for Python


def main():
    module = AnsibleModule(
        argument_spec=dict(
            image=dict(required=True, type='str'),
            cmd=dict(required=False, type='str')
        )
    )

    client = docker.from_env()

    # Start the container detached and report its ID back to Ansible
    container = client.containers.run(
        module.params['image'],
        command=module.params['cmd'],
        detach=True
    )

    module.exit_json(changed=True, container_id=container.id)


if __name__ == '__main__':
    main()

Save as docker_run.py.

Now create the collection structure:

└── ansible_collections
    └── my_namespace
        └── my_collection
            ├── plugins
            │   └── modules 
            │       └── docker_run.py
            └── galaxy.yml
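The galaxy.yml shown in the tree needs at least namespace, name and version fields before the collection can be built; a minimal sketch (version and author are illustrative):

```yaml
# galaxy.yml -- minimal metadata required to build the collection
namespace: my_namespace
name: my_collection
version: 1.0.0
readme: README.md
authors:
  - Acme Engineering
```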

Then build and install it:

ansible-galaxy collection build ansible_collections/my_namespace/my_collection/
ansible-galaxy collection install my_namespace-my_collection-*.tar.gz

Then use the module simply like any other:

- my_namespace.my_collection.docker_run:
    image: "alpine"
    cmd: "sleep infinity"  

Possibilities explode for tailoring automation workflows precisely to your stack and needs. Develop modules for your specialized applications, hardware configurations, optimizing routing – you name it.

Custom Ansible plugins and collections will accelerate operations in your unique environment. Take advantage of this incredible extensibility!

Key Takeaways

We covered immense ground deploying a complex multi-tier app to production while emphasizing portability, automation and architectural best practices. Let's drive home some key learnings:

Simplified Multi-Host Orchestration

Docker Compose = excellent local development. Ansible = simple production deployment across fleets of servers. Combined, they make horizontally scaling infrastructure disappear behind code.

Single Source of Truth

By defining application architecture in Compose files and deploying through Ansible, you benefit from both standardized containers and simplified environment configuration.

Enforced Consistency

Playbooks containerizing infrastructure provisioning facilitate uniform runtimes across environments. Now expectations set locally align smoothly with production reality.

Configurability

Group variables allow externalizing environment-specific parameters for true multi-environment composability. Change database hosts across dev, test and prod in a single line.

Operational Visibility

Monitor scheduling strategies, resource usage, service communication, log events and more to optimize performance. Generate alerts for health thresholds and enable auto-healing.

Infrastructure Immutability

Containerizing Ansible itself allows you to version and distribute automation pipelines across teams. Achieve consolidated control through comprehensive infrastructure definitions.

Extensibility

When you need customized logic, create custom Ansible collections with specialized modules tailored to your stack and needs, vastly expanding automation possibilities.

Conclusion

Docker helps developers focus on applications instead of environment configuration. Ansible allows sysadmins to focus on automation strategies instead of performing repetitive manual operations. Individually they simplify software delivery and IT management – together they take it to the next level.

Adopting this robust combination paves the smooth path towards scalable and reliable CI/CD pipelines. Just remember to codify and centralize everything such that your app deployment and management processes become baked into code and composable across environments.

Now as you streamline your infrastructure, enable developers, delight customers and evolve architectures, you'll have the power to simplify, automate and sensibly govern everything involved through the versatility of Ansible and Docker.

So next time you or a team member attempts to manually ssh and run commands across servers to set up an application stack… just don't – utilize Ansible and Docker Compose instead!
