
30 DevOps Challenges for Junior, Mid and Senior Engineers

9 min read

DevOps Challenges for Automation and Infrastructure

In the fast-paced world of software development, DevOps is the crucial bridge between creating and deploying reliable software. For engineers, a strong command of automation, infrastructure as code (IaC), and CI/CD pipelines is essential for building and maintaining scalable, resilient systems. For employers, finding talent who can streamline development cycles and improve operational efficiency is a top priority.

That’s why we’ve put together this collection of 30 hands-on DevOps challenges. We’ve organized them into three levels—10 for Junior, 10 for Mid-Level, and 10 for Senior engineers—to help you build your skills, prepare for your next interview, or find the perfect candidate for your team.


Junior Engineer Challenges

1. Write a Dockerfile for a Simple Web App

Create a Dockerfile for a simple Node.js or Python web application.

# node:14 is end-of-life; build on a maintained LTS release
FROM node:20
WORKDIR /usr/src/app
COPY package*.json ./
# npm ci installs exactly what package-lock.json pins (use npm install if there is no lockfile)
RUN npm ci
COPY . .
EXPOSE 3000
CMD [ "node", "server.js" ]

2. Create a Simple CI Pipeline with Jenkins

Write a Jenkinsfile to check out code from a Git repository and run a simple command.

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building...'
                sh 'npm install'
            }
        }
    }
}

3. Write a Bash Script to Automate a Simple Task

Write a Bash script to back up a directory to a .tar.gz file.

#!/bin/bash
# -c create, -z gzip, -f output file; a datestamp keeps backups from overwriting each other
tar -czf "backup-$(date +%F).tar.gz" /path/to/directory

4. Create a Simple Ansible Playbook

Write an Ansible playbook to install a package on a remote server.

- hosts: webservers
  become: true          # package installation needs root privileges
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
        update_cache: true

5. Set up a Git Repository and Practice Branching

Initialize a Git repository, create a new branch, make a commit, and merge it back to the main branch.

git init -b main          # -b names the initial branch (Git 2.28+)
git checkout -b new-feature
# ... make changes ...
git add .
git commit -m "Add new feature"
git checkout main
git merge new-feature

6. Write a Simple Terraform Configuration

Write a Terraform configuration to create a new AWS S3 bucket.

resource "aws_s3_bucket" "b" {
  bucket = "my-tf-test-bucket"
}

# Since AWS provider v4, the ACL is managed by a separate resource
resource "aws_s3_bucket_acl" "b" {
  bucket = aws_s3_bucket.b.id
  acl    = "private"
}

7. Monitor a Service with Prometheus

Write a simple Prometheus configuration to scrape metrics from a running service.

scrape_configs:
  - job_name: 'my-app'
    static_configs:
      - targets: ['localhost:3000']

8. Create a Simple Docker Compose File

Write a Docker Compose file to run a web application and a database.

# The top-level `version` key is obsolete in Compose v2 and can be omitted
services:
  web:
    build: .
    ports:
      - "3000:3000"
    depends_on:
      - db
  db:
    image: "postgres"
    environment:
      POSTGRES_PASSWORD: example  # the postgres image refuses to start without a password

9. Set up a Simple AWS S3 Bucket

Use the AWS CLI to create a new S3 bucket.

aws s3 mb s3://my-bucket   # bucket names are globally unique, so pick something distinctive

10. Write a Simple Shell Script to Check Service Status

Write a shell script to check if a service is running.

#!/bin/bash
SERVICE="${1:-nginx}"   # service name as the first argument, defaulting to nginx
if systemctl is-active --quiet "$SERVICE"; then
    echo "$SERVICE is running"
else
    echo "$SERVICE is not running"
    exit 1
fi

Mid-Level Engineer Challenges

1. Create a Multi-Stage Docker Build

Write a Dockerfile that uses a multi-stage build to create a smaller final image.

# Build stage: node:14 is end-of-life, so build on a maintained LTS
FROM node:20 AS builder
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Final stage: only the built static assets reach the nginx image
FROM nginx
COPY --from=builder /usr/src/app/build /usr/share/nginx/html

2. Build a CI/CD Pipeline with Automated Testing

Write a Jenkinsfile that includes a testing stage before deploying the application.

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'npm install'
            }
        }
        stage('Test') {
            steps {
                sh 'npm test'
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying...'
            }
        }
    }
}

3. Write a Complex Ansible Role

Create an Ansible role to install and configure a web server with a custom configuration.

# In roles/webserver/tasks/main.yml
- name: Install nginx
  apt:
    name: nginx
    state: present
- name: Copy nginx config
  copy:
    src: nginx.conf
    dest: /etc/nginx/nginx.conf
  notify: Restart nginx

# In roles/webserver/handlers/main.yml
- name: Restart nginx
  service:
    name: nginx
    state: restarted

4. Manage Infrastructure with Terraform Modules

Create a Terraform module to manage a reusable piece of infrastructure, like a VPC.

# In modules/vpc/main.tf
resource "aws_vpc" "main" {
  cidr_block = var.cidr_block
}

# In main.tf
module "vpc" {
  source = "./modules/vpc"
  cidr_block = "10.0.0.0/16"
}

5. Set up a Kubernetes Deployment

Write a Kubernetes manifest to deploy a simple application.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:1.0.0
        ports:
        - containerPort: 3000

6. Create a Custom Grafana Dashboard

Create a Grafana dashboard to visualize metrics from Prometheus.

# This is a visual task that would be done in the Grafana UI.
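While the dashboard layout itself is built in the UI, the data source (and any dashboards exported as JSON) can be provisioned as code so the setup is reproducible. A minimal sketch, assuming Prometheus is running on localhost:9090:

```yaml
# grafana/provisioning/datasources/prometheus.yml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://localhost:9090
    isDefault: true
```

Dashboards exported from the UI as JSON can be loaded the same way via a dashboard provider file in grafana/provisioning/dashboards/.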

7. Implement a Canary Deployment Strategy

Write a Kubernetes manifest that uses a canary deployment strategy.

# This would involve creating two deployments, one for the stable version and one for the canary version, and a service that routes traffic to both.
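The idea above can be sketched with plain Kubernetes objects: the Service selects only on the shared `app` label, so traffic splits roughly in proportion to replica counts (9 stable : 1 canary ≈ 90/10). The names, labels, and image tags are illustrative placeholders:

```yaml
# Stable deployment: 9 replicas receive ~90% of traffic
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-stable
spec:
  replicas: 9
  selector:
    matchLabels:
      app: my-app
      track: stable
  template:
    metadata:
      labels:
        app: my-app
        track: stable
    spec:
      containers:
        - name: my-app
          image: my-app:1.0.0
---
# Canary deployment: 1 replica receives ~10% of traffic
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
      track: canary
  template:
    metadata:
      labels:
        app: my-app
        track: canary
    spec:
      containers:
        - name: my-app
          image: my-app:1.1.0
---
# The Service selects only on `app`, so it load-balances across both tracks
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 3000
```

For finer-grained or percentage-exact splits, a service mesh or ingress-level weighting is the usual next step.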

8. Automate Cloud Backups

Write a script to automate backups of an AWS EC2 instance.

# This would involve using the AWS CLI to create a snapshot of the EC2 instance's EBS volume.
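A minimal sketch with the AWS CLI, assuming credentials are configured; the instance ID is a hypothetical placeholder:

```shell
#!/bin/bash
INSTANCE_ID="i-0123456789abcdef0"   # hypothetical instance ID

# Find every EBS volume attached to the instance...
VOLUME_IDS=$(aws ec2 describe-volumes \
  --filters "Name=attachment.instance-id,Values=${INSTANCE_ID}" \
  --query "Volumes[].VolumeId" --output text)

# ...and snapshot each one with a dated description
for VOL in ${VOLUME_IDS}; do
  aws ec2 create-snapshot \
    --volume-id "${VOL}" \
    --description "Backup of ${VOL} on $(date +%F)"
done
```

In production you would run this on a schedule (cron, EventBridge) and add snapshot pruning so old backups don't accumulate.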

9. Set up a Private Docker Registry

Use Docker to run a private Docker registry.

docker run -d -p 5000:5000 --name registry registry:2

10. Write a Python Script for Cloud Automation

Write a Python script using the Boto3 library to list all S3 buckets.

import boto3

s3 = boto3.client('s3')
response = s3.list_buckets()

for bucket in response['Buckets']:
    print(f'  {bucket["Name"]}')

Senior Engineer Challenges

1. Design a Highly Available and Scalable Architecture

Design a cloud architecture that can handle a large amount of traffic and is resilient to failure.

# This would involve using a load balancer, auto-scaling groups, and a multi-AZ database.
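Those building blocks can be sketched in Terraform; every name, variable, and instance size below is a hypothetical placeholder:

```hcl
# Application Load Balancer spreads traffic across availability zones
resource "aws_lb" "app" {
  name               = "app-lb"
  load_balancer_type = "application"
  subnets            = var.public_subnet_ids
}

# Launch template + auto scaling group replace failed instances automatically
resource "aws_launch_template" "app" {
  name_prefix   = "app-"
  image_id      = var.ami_id
  instance_type = "t3.micro"
}

resource "aws_autoscaling_group" "app" {
  min_size            = 2
  max_size            = 10
  vpc_zone_identifier = var.private_subnet_ids
  launch_template {
    id      = aws_launch_template.app.id
    version = "$Latest"
  }
}

# Multi-AZ RDS keeps a synchronous standby in a second availability zone
resource "aws_db_instance" "db" {
  engine            = "postgres"
  instance_class    = "db.t3.micro"
  allocated_storage = 20
  multi_az          = true
  username          = var.db_username
  password          = var.db_password
}
```

In an interview, be ready to discuss the trade-offs too: cost of multi-AZ, scaling policies, and what happens during an AZ failure.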

2. Implement a GitOps Workflow with Argo CD

Set up a GitOps workflow to automatically deploy changes to a Kubernetes cluster when they are pushed to a Git repository.

# This would involve installing Argo CD on a Kubernetes cluster and configuring it to watch a Git repository.
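Once Argo CD is installed, the watch-and-sync behaviour is declared with an Application resource. A sketch, with a hypothetical repository URL and paths:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app-manifests.git
    targetRevision: main
    path: k8s                      # directory holding the manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true                  # delete resources removed from Git
      selfHeal: true               # revert manual drift in the cluster
```

With automated sync enabled, merging to main becomes the deployment action; the cluster converges on whatever the repository declares.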

3. Create a Custom Terraform Provider

Write a custom Terraform provider to manage a resource that is not supported by any of the official providers.

# This is a complex task that would require a lot of code.
# Writing a provider requires Go and the Terraform Plugin Framework;
# a full implementation is too long for this format.
# A full implementation would be too long for this format.

4. Set up a Service Mesh with Istio

Set up a service mesh to manage traffic between microservices in a Kubernetes cluster.

# This would involve installing Istio on a Kubernetes cluster and configuring it to manage traffic between services.
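The traffic-management piece is declarative: a DestinationRule names the versions and a VirtualService splits traffic between them. A sketch, assuming pods labelled version: v1 and version: v2 behind a my-app Service:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: my-app
spec:
  host: my-app
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
    - my-app
  http:
    - route:
        - destination:
            host: my-app
            subset: v1
          weight: 90
        - destination:
            host: my-app
            subset: v2
          weight: 10
```

Shifting the weights over time gives you a controlled rollout without touching the Deployments themselves.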

5. Implement a Chaos Engineering Experiment

Use a tool like Chaos Monkey to randomly terminate instances in a production environment to test for resiliency.

# This would involve deploying Chaos Monkey to a production environment and configuring it to randomly terminate instances.
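Chaos Monkey itself targets AWS auto-scaling groups; on Kubernetes, a tool like Chaos Mesh expresses the same experiment declaratively. A sketch, with a hypothetical target label:

```yaml
apiVersion: chaos-mesh.org/v1alpha1
kind: PodChaos
metadata:
  name: random-pod-kill
spec:
  action: pod-kill        # terminate pods to test resilience
  mode: one               # kill one randomly chosen matching pod
  selector:
    labelSelectors:
      app: my-app
```

Whatever the tool, the discipline is the same: define a steady-state hypothesis first, limit the blast radius, and observe whether the system self-heals.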

6. Design and Implement a Centralized Logging System

Design and implement a centralized logging system using the ELK stack (Elasticsearch, Logstash, and Kibana).

# This would involve setting up an ELK stack and configuring applications to send logs to it.
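The Logstash pipeline is where the pieces meet: agents ship logs in, filters parse them, Elasticsearch stores them for Kibana. A sketch with conventional defaults (ports and index name are assumptions):

```
# logstash.conf
input {
  beats {
    port => 5044            # Filebeat agents on each host ship logs here
  }
}

filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }   # parse web access logs
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "app-logs-%{+YYYY.MM.dd}"   # daily indices simplify retention
  }
}
```

At the design level, also be prepared to discuss retention policies, index lifecycle management, and what happens when ingest volume spikes.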

7. Automate Security Compliance and Auditing

Use a tool like Chef InSpec to automate security compliance and auditing.

# This would involve writing InSpec profiles to check for compliance with security policies and running them against servers.
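An InSpec profile is a set of Ruby controls; a minimal sketch (the control ID and title are hypothetical):

```ruby
# controls/ssh.rb
control 'sshd-01' do
  impact 1.0
  title 'Disallow root login over SSH'
  describe sshd_config do
    its('PermitRootLogin') { should eq 'no' }
  end
end
```

Running `inspec exec . -t ssh://user@host` against a fleet, ideally from a scheduled CI job, turns the policy into a continuous audit rather than a one-off check.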

8. Create a Jenkins Shared Library

Create a Jenkins shared library to share common pipeline code across multiple projects.

# This would involve creating a Git repository with the shared library code and configuring Jenkins to use it.
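A shared library exposes custom steps from a vars/ directory in its repository. A sketch of one such step (the file and parameter names are hypothetical):

```groovy
// vars/standardBuild.groovy
def call(Map config = [:]) {
    node {
        stage('Build') {
            sh config.get('buildCmd', 'npm ci')
        }
        stage('Test') {
            sh config.get('testCmd', 'npm test')
        }
    }
}
```

After registering the library in Jenkins, a project's Jenkinsfile reduces to `@Library('my-shared-lib') _` followed by `standardBuild(buildCmd: 'npm ci')`, so pipeline changes happen in one place.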

9. Set up a Multi-Cloud Infrastructure

Use Terraform to manage infrastructure across multiple cloud providers.

# This would involve using the Terraform providers for each cloud provider to manage infrastructure.
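Since each provider is just a block in the same configuration, one Terraform run can manage resources in several clouds. A sketch with hypothetical bucket names and project IDs:

```hcl
provider "aws" {
  region = "us-east-1"
}

provider "google" {
  project = "my-project"
  region  = "us-central1"
}

# Equivalent object storage in each cloud, managed from one state
resource "aws_s3_bucket" "logs" {
  bucket = "my-multicloud-logs"
}

resource "google_storage_bucket" "logs" {
  name     = "my-multicloud-logs"
  location = "US"
}
```

The harder part is operational: separate credentials per provider, state storage that isn't tied to either cloud, and deciding which workloads genuinely need to span providers.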

10. Optimize Cloud Costs with Automation

Write a script to automatically shut down idle resources to save money.

# This would involve using the AWS CLI or Boto3 to identify idle resources and shut them down.
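A Boto3 sketch of the idea: a pure helper decides which instances count as idle (the CPU threshold is an assumption, not an AWS default), and a thin wrapper stops them, with DryRun on by default so the script can be rehearsed safely.

```python
CPU_IDLE_THRESHOLD = 2.0  # percent; hypothetical cutoff for "idle"

def idle_instance_ids(avg_cpu_by_instance, threshold=CPU_IDLE_THRESHOLD):
    """Filter an {instance_id: average CPU %} map down to the idle instances."""
    return [iid for iid, cpu in avg_cpu_by_instance.items() if cpu < threshold]

def stop_idle_instances(avg_cpu_by_instance, dry_run=True):
    """Stop every instance flagged as idle.

    Requires AWS credentials. The CPU averages are assumed to have been
    collected beforehand, e.g. from CloudWatch CPUUtilization metrics.
    """
    import boto3  # imported lazily so the pure helper stays testable offline
    ec2 = boto3.client("ec2")
    targets = idle_instance_ids(avg_cpu_by_instance)
    if targets:
        ec2.stop_instances(InstanceIds=targets, DryRun=dry_run)
    return targets
```

Scheduled nightly (cron, Lambda), this pattern pays for itself quickly; the same shape extends to unattached EBS volumes and idle load balancers.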

Tips to Prepare for DevOps Challenges

  • Understand the “Why”: Don’t just learn how to use a tool; understand the problem it solves. Why use Kubernetes? Why is Infrastructure as Code important?
  • Master the Core Tools: Have hands-on experience with the fundamentals: Git, a CI/CD tool (like Jenkins or GitLab CI), a configuration management tool (like Ansible), and a cloud provider (like AWS).
  • Practice with Real-World Scenarios: The best way to learn is by doing. Set up your own projects and practice building and deploying them.
  • Think About the Big Picture: DevOps is not just about tools; it’s about culture and process. Be prepared to discuss how you would improve a team’s development lifecycle.
  • Stay Up-to-Date: The DevOps landscape is constantly evolving. Keep up with the latest tools and trends.

Conclusion

DevOps is a critical and exciting field that blends software development with IT operations. By practicing these challenges, you’ll be well-prepared to tackle any interview and build the automated, scalable, and resilient systems that power modern software. Keep learning and automating!
