Workshop Goal: Learn how to embed AI agents into developer workflows to automatically enforce standards and handle the tasks humans consistently skip.
Your team has standards. They're written down somewhere—maybe in a wiki, a README, or that Confluence page from 2023. The problem isn't that standards don't exist.
The problem is that no one follows them consistently.
Why? Because humans are humans:
- You love solving problems and writing code
- You don't love writing documentation, updating changelogs, or filling out PR templates
- Deadlines create shortcuts, and shortcuts become habits
No blame. No shame. You're only human.
What if the things you consistently skip... just happened automatically?
| Human Tendency | Agent Solution |
|---|---|
| "I'll document this later" (you won't) | Agent generates docs on every commit |
| "The PR description is good enough" | Agent enforces PR templates and adds context |
| "I know what this code does" (for now) | Agent adds inline comments and README updates |
| "Security review can wait" | Agent runs compliance checks before merge |
The pattern: Encode your standards into agents, then embed those agents into workflows where they execute automatically.
We'll build up from manual prompts to fully automated pipelines.
Start by creating reusable prompts that any team member can invoke on demand. This is the lowest-friction entry point—no infrastructure changes, no CI/CD modifications.
In GitHub Copilot, you can create slash commands by following this folder/naming convention:

`<repo-root>/.github/prompts/<prompt-name>.prompt.md`

Create a prompt at `.github/prompts/write-docs.prompt.md`:
```markdown
You are a technical documentation specialist. When invoked:
1. Analyze the current file or selection
2. Generate appropriate documentation based on the content type:
   - For functions/methods: JSDoc/docstrings with params, returns, and examples
   - For classes: Class-level documentation with usage examples
   - For modules: README sections with purpose, installation, and API reference
3. Follow our team's documentation standards:
   - Use present tense ("Returns" not "Will return")
   - Include at least one usage example
   - Document edge cases and error conditions
4. Output the documentation in a format ready to paste or commit
```

Team members invoke it locally in VS Code:
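For example (a sketch: the slash command takes its name from the prompt file, and the CLI variant assumes the GitHub Copilot CLI is installed and authenticated):

```shell
# In VS Code Copilot Chat, type the slash command named after the file:
#   /write-docs
# Or run the same prompt from a terminal via the Copilot CLI:
copilot -p "$(cat .github/prompts/write-docs.prompt.md)"
```

Either path executes the same prompt text, so local and scripted runs stay consistent.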
This repo includes three production-ready prompts for Azure Kubernetes Service (AKS) operations. These demonstrate how platform teams can encode operational runbooks into reusable agent prompts.
You might ask: "I already know kubectl. Why do I need an agent prompt for this?"
Here's how this ties back to our Act 2 problem statement:
| The Human Reality | The Agent Solution |
|---|---|
| At 3am during an incident, you (or a team member) forget which kubectl flags to use | Agent runs the complete diagnostic workflow every time |
| Senior engineer knows to check node pressure after pod failures—junior doesn't | Agent encodes the full troubleshooting sequence for everyone |
| Under pressure, you skip steps or jump to conclusions | Agent follows the same systematic process regardless of stress |
| Runbook in Confluence says "check pods" but not how | Prompt IS the executable runbook—no interpretation needed |
| New team member takes months to learn incident response | New hire invokes the prompt on day one |
The core insight: The kubectl commands aren't the hard part. The hard part is:
- Knowing which commands to run, in what order
- Interpreting the output correctly
- Connecting symptoms to root causes
- Remembering all of this at 3am when paged
These prompts encode your senior engineers' diagnostic intuition into something any team member can invoke—consistently, completely, every time.
1. Check Pod Health (`aks-check-pods.prompt.md`)

Diagnose unhealthy pods across your AKS cluster:

```markdown
# Check for Pod Health Issues
Check the health status of all pods in an Azure Kubernetes Service (AKS)
cluster and identify any pods that are not in a 'Running' state.
Provide a summary of the issues found and suggest possible remediation steps.

### Run these Commands
- kubectl get pods -n <namespace>
- kubectl describe pod <pod-name> -n <namespace>
- kubectl logs <pod-name> -n <namespace>

### Output
A report including: Cluster Name, Pod Name, Pod Status, Issues Found,
and Suggested Remediation Steps.

### Remediation Suggestions
- Check for resource constraints (CPU, memory)
- Review pod logs for errors
- Scale the cluster if resource limits are being hit
- Redeploy the pod if it is in a crash loop

### Note
Do not generate any scripts. Only provide analysis and suggestions.
```

Use case: On-call engineer needs to quickly triage pod issues without memorizing all the kubectl commands and diagnostic steps.
2. Check Node Health (`aks-check-nodes.prompt.md`)

Identify and diagnose unhealthy nodes in your cluster:

```markdown
# Check for AKS Nodes Health Issues
Check the health status of all nodes in an Azure Kubernetes Service (AKS)
cluster and identify any nodes that are not in a 'Ready' state.

### Run these Commands
- kubectl get nodes
- kubectl describe node <node-name>
- kubectl top nodes
- kubectl cluster-info

### Output
A report including: Cluster Name, Node Name, Node Status, Issues Found,
and Suggested Remediation Steps.

### Remediation Suggestions
- Check for resource constraints (CPU, memory)
- Review node logs for errors
- Scale the cluster if resource limits are being hit
- Contact Azure support if the issue persists
```

Use case: Platform engineer investigating cluster-level issues, capacity problems, or node failures.
3. Remediation Assistant (`aks-remediation.prompt.md`)

After diagnosis, get specific remediation guidance:

```markdown
# AKS Remediation for cluster issues
Provide remediation based on analysis and suggestions from the previous steps.

### Proposed Remediation Steps
Be specific in your remediation suggestions, including commands to run,
configuration changes to make, or resources to consult.
Tailor the suggestions based on the identified issues.

### Notes
- Do not generate any scripts.
- Always ask for confirmation before applying any remediation steps.
```

Use case: After running diagnostics, the engineer invokes this prompt to get specific, actionable remediation steps—without the agent making changes autonomously.
These three prompts demonstrate a chained workflow:
```
┌───────────────────┐      ┌───────────────────┐      ┌───────────────────┐
│  aks-check-pods   │ ───▶ │  aks-check-nodes  │ ───▶ │  aks-remediation  │
│  (What's wrong?)  │      │  (System-level?)  │      │  (How to fix?)    │
└───────────────────┘      └───────────────────┘      └───────────────────┘
```
Each prompt is:
- Focused — one responsibility per prompt
- Safe — analysis only, no autonomous changes
- Actionable — provides specific next steps
- Reusable — any team member can invoke without deep Kubernetes expertise
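Because each prompt is self-contained, the same chain can also be driven from a terminal. A sketch, assuming the Copilot CLI is installed and authenticated and the prompts live under `.github/prompts/`:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Run the diagnostic chain in order: pods first, then nodes, then remediation.
for prompt in aks-check-pods aks-check-nodes aks-remediation; do
  echo "=== ${prompt} ==="
  copilot -p "$(cat ".github/prompts/${prompt}.prompt.md")"
done
```

The engineer reviews each step's output before moving on, which preserves the "analysis only, human decides" boundary.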
Remember: "Standards exist, but they're not enforced."
Your team probably has incident response standards:
- "Always check pod status before escalating"
- "Look at node health if multiple pods are failing"
- "Document what you tried before handing off"
But under pressure, these standards get skipped. The agent ensures the standard diagnostic process runs every time—not because humans are bad, but because humans are human.
This is the same pattern as auto-generating documentation: encode the standard into an agent, then let the agent enforce it consistently.
Time: 15 minutes
- Identify one task your team does inconsistently (docs, tests, PR descriptions, etc.)
- Write a prompt that handles that task
- Save it to `.github/prompts/`
- Have a teammate test it on their code
Discussion: What standards surfaced while writing this prompt that weren't explicitly documented?
Manual prompts are great, but they still require humans to remember to run them. The next level: automate execution in your CI/CD pipeline.
Every push, every PR—the agent runs whether you remembered or not.
We'll use GitHub Actions to trigger our prompts automatically. See the complete example at `.github/workflows/copilot.generate-docs.yml`.

```yaml
name: Auto-Generate Documentation

on:
  push:
    branches: [main]
  workflow_dispatch: # Manual trigger option

jobs:
  generate-docs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Install GitHub Copilot CLI
        run: npm install -g @github/copilot

      - name: Generate Documentation
        env:
          GITHUB_TOKEN: ${{ secrets.COPILOT_PAT }}
        run: |
          # Load prompt content
          PROMPT=$(cat .github/prompts/write-docs.prompt.md)
          # Execute against changed files
          copilot -p "$PROMPT"
```

| Component | Purpose |
|---|---|
| Trigger: `push` to `main` | Ensures docs are generated on every merge, regardless of whether the developer ran the prompt locally |
| Trigger: `workflow_dispatch` | Allows manual execution for catch-up or testing |
| GitHub Copilot CLI | Enables scripted/automated execution of Copilot outside of the IDE |
| Prompt loaded into a shell variable | Loads your team's custom prompt so every run uses the same instructions |
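The "Execute against changed files" comment in the workflow above is aspirational: the step as written sends only the prompt text. One option, sketched here rather than taken from the repo, is to append a `git diff` file list to the prompt before invoking the CLI:

```shell
# Collect files changed by the pushed commit.
CHANGED=$(git diff --name-only HEAD~1 HEAD)

# Append the file list so the agent knows what to document.
PROMPT="$(cat .github/prompts/write-docs.prompt.md)

Changed files:
${CHANGED}"

copilot -p "$PROMPT"
```

Note that `actions/checkout` performs a shallow clone by default, so referencing `HEAD~1` requires setting `fetch-depth: 2` (or more) on the checkout step.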
Note
GitHub Copilot CLI Authentication
Currently, GitHub Copilot is licensed per user, meaning API calls must be authenticated with a user account. For CI/CD automation:

- Create a Fine-Grained Personal Access Token (PAT) with the `Copilot-Requests: Read-only` permission
- Store it as a repository secret (e.g., `COPILOT_PAT`)
- This consumes Premium Request Units (PRUs) from the token owner's account
Future: GitHub is investigating organization-level Copilot API access for CI/CD scenarios (e.g. GitHub Actions and GitHub Apps).
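Assuming the GitHub CLI is available locally, storing the token is a single command (`COPILOT_TOKEN` here is a placeholder environment variable holding the PAT value, not something defined by this repo):

```shell
# Store the fine-grained PAT as a repository secret named COPILOT_PAT.
gh secret set COPILOT_PAT --body "$COPILOT_TOKEN"
```

Setting the secret via the CLI keeps the token out of shell history files that `echo`-based approaches might leak into.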
Run security-focused prompts on infrastructure or sensitive code changes:
A minimal sketch (the original example omitted the runner and checkout step; they are added here so the job actually runs, following the pattern of the docs workflow above):

```yaml
on:
  pull_request:
    paths:
      - 'infrastructure/**'
      - '**/security/**'
      - '**/*.tf'

jobs:
  security-review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Security Compliance Review
        env:
          GITHUB_TOKEN: ${{ secrets.COPILOT_PAT }}
        run: |
          export PROMPT=$(cat .github/prompts/security-baseline.prompt.md)
          copilot -p "$PROMPT"
```

Time: 30 minutes
- Choose a standard your team has but doesn't consistently enforce
- Write the prompt that checks or generates compliance with that standard
- Create a GitHub Action that runs this prompt on appropriate triggers
- Test it by pushing a commit that would normally skip this standard
Checkpoint Questions:
- What triggers make sense for this automation? (push, PR, schedule, manual)
- Should failures block the pipeline or just warn?
- How will you handle false positives?
- Crawl: Start with reusable prompts team members run manually—low friction, immediate value
- Walk: Add CI/CD triggers so prompts run automatically on key events
- Run: Build comprehensive pipelines that enforce multiple standards consistently
- The meta-benefit: Writing prompts forces you to articulate standards that were previously implicit
| Pitfall | Solution |
|---|---|
| Over-automation too fast | Start with advisory (non-blocking) automation, then tighten |
| Prompts that are too generic | Include your team's specific standards and examples |
| Ignoring false positives | Build in human override mechanisms and feedback loops |
| Token/PRU budget surprises | Monitor usage and set alerts on consumption |
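For the first pitfall, one low-risk pattern in GitHub Actions (a sketch, not tied to a specific workflow in this repo) is to mark the agent step advisory with `continue-on-error`, then drop the flag once false positives become rare:

```yaml
      - name: Security Compliance Review
        continue-on-error: true  # advisory: surface findings without failing the build
        run: |
          copilot -p "$(cat .github/prompts/security-baseline.prompt.md)"
```

This lets the team see the agent's output on every PR while keeping merges unblocked during the tuning period.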
GitHub Copilot CLI:
Example Prompts in This Repo:
- `aks-check-pods.prompt.md` — Diagnose unhealthy pods
- `aks-check-nodes.prompt.md` — Diagnose unhealthy nodes
- `aks-remediation.prompt.md` — Get remediation guidance
- `analyze-for-docs.prompt.md` — Generate documentation
Example Workflows:
In Act 3, we'll explore how agents can help with operational challengesβdebugging live systems, incident response, and maintaining production infrastructure.
