The Developer Experience team actively uses AI to accelerate our own work, validate GitLab's AI capabilities from a developer perspective, and build institutional knowledge about effective AI-assisted engineering workflows. This page documents our tools, guidelines, and practices.
## Tools We Use
| Tool | Purpose | Primary Use Cases |
|------|---------|-------------------|
| **GitLab Duo** | GitLab's native AI assistant | Code suggestions, code review, merge request summaries, root cause analysis |
| **Claude (Anthropic)** | AI assistant for complex tasks | Writing, research, architecture discussions, long-form documentation |
| **gitlab-mcp** | MCP server connecting GitLab to AI tools | Giving AI assistants live access to GitLab projects, issues, MRs, and pipelines |
As GitLab's developer experience team, we prioritize GitLab Duo as our primary AI tool — both because it's our product and because using it as customer zero gives us direct feedback to improve it for all engineers.
## Guidelines and Principles
### What We Encourage
- **Dogfooding GitLab Duo first.** When an AI task can be done with GitLab Duo, use it. This generates real usage data, surfaces friction points, and creates feedback we can act on.
- **AI-assisted review, not AI-replaced review.** Use AI to help identify issues faster, not to skip human judgment on correctness or security.
- **Sharing what works.** When you find a useful prompt, workflow, or technique, document it here or share it in our team channel so the whole team benefits.
- **Transparency with collaborators.** When AI meaningfully contributed to a deliverable (document, design, code), note it so collaborators understand the context.
### What We Avoid
- **Committing AI-generated code without review.** All code, AI-assisted or not, goes through standard review. AI output can be confidently wrong.
- **Sharing confidential data with external AI tools.** Do not paste internal customer data, unreleased feature details, or other confidential information into AI tools outside GitLab's boundaries. Follow [GitLab's AI acceptable use policy](/handbook/legal/acceptable-use-policy/).
- **Over-relying on AI for security-sensitive work.** Security decisions require human expertise. Use AI as a starting point, not the final word.
## Workflows
### Code Review Assistance
We use GitLab Duo Code Review to get an AI-generated summary and initial feedback before assigning human reviewers. This helps reviewers focus on higher-level concerns rather than mechanical issues.
**How we use it:**
1. Open a merge request and trigger Duo's code review summary
2. Address any straightforward issues flagged (typos, obvious style issues)
3. Assign human reviewers with the AI summary available for context
### Test Generation
For new features and bug fixes, we use AI to accelerate test scaffolding — generating initial test cases from a function signature or spec, which we then refine.
**Useful prompts:**
- "Generate RSpec unit tests for this method, covering edge cases for nil inputs and boundary values"
- "Given this failing test output, suggest what the root cause might be and how to fix it"
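To make the first prompt concrete, here is the kind of scaffold it tends to produce. The method and its behaviour are hypothetical, and the example uses stdlib Minitest so it runs anywhere; the same prompt works equally well for RSpec:

```ruby
# Hypothetical method under test -- stands in for "a function signature or spec".
def safe_percentage(part, total)
  return 0.0 if part.nil? || total.nil? || total.zero?

  (part.to_f / total * 100).round(1)
end

require "minitest/autorun"

# The kind of scaffold the prompt produces: edge cases for nil inputs
# and boundary values, ready to refine with domain knowledge.
class SafePercentageTest < Minitest::Test
  def test_nil_part_returns_zero
    assert_equal 0.0, safe_percentage(nil, 10)
  end

  def test_nil_total_returns_zero
    assert_equal 0.0, safe_percentage(5, nil)
  end

  def test_zero_total_avoids_division_by_zero
    assert_equal 0.0, safe_percentage(5, 0)
  end

  def test_full_total_returns_one_hundred
    assert_equal 100.0, safe_percentage(10, 10)
  end

  def test_rounds_to_one_decimal_place
    assert_equal 33.3, safe_percentage(1, 3)
  end
end
```

Treat the generated cases as a starting point: the AI covers the mechanical edges, and you add the cases that require knowing the feature's intent.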
### Documentation Drafting
For handbook pages, runbooks, and design documents, we use AI to draft structure and boilerplate, then refine with team-specific knowledge and decisions.
### Incident Analysis
During and after incidents, we use GitLab Duo's root cause analysis feature on failing pipelines and use AI assistants to help parse large log outputs and identify patterns.
### Codifying Standards with MR Review Instructions
GitLab Duo's [custom MR review instructions](https://docs.gitlab.com/user/gitlab_duo/customize_duo/review_instructions/) let you define project-specific standards that Duo automatically enforces on every merge request review. Rather than relying on reviewers to catch the same patterns repeatedly, you write the rules once and Duo applies them consistently.
**How it works:**
Create a `.gitlab/duo/mr-review-instructions.yaml` file in your repository. Duo reads it whenever it reviews an MR and appends your custom rules to its standard review criteria. When it finds a violation, it comments: _"According to custom instructions in '[rule name]': [feedback]"_.
**Example — enforcing DevEx conventions:**
```yaml
instructions:
  - name: Ruby conventions
    fileFilters:
      - "**/*.rb"
      - "!spec/**/*.rb"
    instructions: |
      1. All public methods must have Sorbet type signatures
      2. Prefer keyword arguments for methods with 3 or more parameters
      3. Do not use `rescue Exception` — rescue specific error classes only
  - name: Test quality
    fileFilters:
      - "spec/**/*.rb"
    instructions: |
      1. Every new spec must have a description that reads as a sentence
      2. Avoid `allow_any_instance_of` — use proper doubles or dependency injection
      3. Flag any test that makes real network calls without stubbing
  - name: CI configuration
    fileFilters:
      - ".gitlab-ci.yml"
      - ".gitlab/ci/**/*.yml"
    instructions: |
      1. New jobs must define a `stage` explicitly
      2. Do not hardcode environment-specific values — use CI/CD variables
```
**What to codify:**
Good candidates for MR review instructions are standards that are:
- Frequently raised in human reviews but easy to miss
- Project-specific (not covered by linters or existing tooling)
- Clear enough to be checked mechanically (avoid vague rules like "write clean code")
**Tips:**
- Use `fileFilters` to scope rules to the right files — broad rules applied everywhere generate noise
- Number your instructions so Duo's feedback references them clearly
- Start with a small set of high-value rules and expand over time based on what human reviewers still catch
- Check the [GitLab Duo MR review instructions docs](https://docs.gitlab.com/user/gitlab_duo/customize_duo/review_instructions/) for the full YAML reference
### Baking Context into Repositories with CLAUDE.md and AGENTS.md
AI coding agents work best when they understand your project's conventions up front. Rather than explaining your codebase in every conversation, you can commit instruction files that agents automatically pick up whenever they work in your repository.
**CLAUDE.md** is read by Claude Code at the start of every session. Place one in the root of a repository and Claude will load it before doing any work — no prompting required. You can also add CLAUDE.md files in subdirectories; they load on demand when Claude reads files in that directory, which is useful in monorepos.
**AGENTS.md** is an [open standard](https://agents.md/) supported by multiple tools including OpenAI Codex, Cursor, and Windsurf. It serves the same purpose as CLAUDE.md but for a broader set of agents. If you want agent instructions that work across tools, AGENTS.md is the better choice.
Both files can coexist in the same repository without conflict.
**What to include:**
```markdown
# Commands
- Run tests: `bundle exec rspec`
- Lint: `bundle exec rubocop`
# Code Style
- Prefer keyword arguments for methods with 3+ parameters
# Repository Structure
- Feature code lives in `app/`, specs mirror this in `spec/`
- Never commit directly to the default branch (`main` or `master`)
# Off Limits
- Do not modify files in `db/migrate/` unless explicitly asked
- Do not change `.gitlab-ci.yml` without checking with the team
```
**Tips:**
- Keep files under ~200 lines — agents give less weight to long files
- Include the actual commands to run, not just tool names
- Use an "Off Limits" or "Never touch" section to prevent agents from modifying sensitive files (migrations, CI config, secrets)
- For monorepos, put a root-level file with shared conventions and per-package files for package-specific rules
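For a monorepo, that layered layout might look like this (package names are illustrative):

```
CLAUDE.md                  # shared conventions for the whole repo
packages/
  api/
    CLAUDE.md              # rules specific to the API package
  web/
    CLAUDE.md              # rules specific to the frontend package
```

The root file carries conventions every agent session needs; the per-package files load only when the agent works in that directory, keeping each file short.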
**Generating a starting point with Claude Code:**
Run `/init` in Claude Code from the root of your repository and it will analyse your codebase and generate a CLAUDE.md draft you can refine.
### GitLab MCP Server
The [GitLab MCP server](https://docs.gitlab.com/user/gitlab_duo/model_context_protocol/mcp_server/) implements the [Model Context Protocol](https://modelcontextprotocol.io/) (MCP), a standard that lets AI assistants connect to external tools and data sources. With it configured, an AI tool like Claude Code can directly read issues, merge requests, pipelines, and project data from GitLab without you copying and pasting context manually.
**What it enables:**
- Ask your AI assistant about open issues or MR status without leaving your editor
- Have the AI create issues, comment on MRs, or trigger actions based on a conversation
- Provide the AI with live project context when debugging or planning work
**Setting it up:**
GitLab's MCP server is available at `https://gitlab.com/api/v4/mcp` and uses OAuth for authentication. The HTTP transport is the recommended approach — no extra dependencies required.
_Claude Code:_
```shell
claude mcp add --transport http GitLab https://gitlab.com/api/v4/mcp
```
_Claude Desktop:_ Add the following to `~/Library/Application Support/Claude/claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "GitLab": {
      "type": "http",
      "url": "https://gitlab.com/api/v4/mcp"
    }
  }
}
```
After adding the server, you'll be prompted to authorise via browser-based OAuth on first use.
**glab mcp (alternative)**
The GitLab CLI also ships an experimental MCP server via `glab mcp serve`, which exposes similar GitLab functionality via the stdio transport. This is useful if you prefer running the server locally or need to connect to a self-managed instance. Note this feature is marked experimental and may change.
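If you go the stdio route with Claude Desktop, the app launches `glab mcp serve` for you. A minimal sketch of the config, assuming `glab` is installed and authenticated (the `command`/`args` shape is the standard MCP stdio server entry):

```json
{
  "mcpServers": {
    "GitLab": {
      "command": "glab",
      "args": ["mcp", "serve"]
    }
  }
}
```

Because `glab` handles authentication through its own stored credentials, no OAuth flow is needed with this transport.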
## Experimentation and Dogfooding
As a DevEx team, we have a unique responsibility to experience GitLab's AI features as a typical engineering team would. This informs both our own roadmap and our feedback to GitLab product teams.
### How We Dogfood
- We use GitLab Duo in our day-to-day work and file issues for friction points or missing functionality
- When a new Duo feature ships, we aim to try it on real work within the sprint and share findings in our team retrospective
## Resources
- [AGENTS.md open standard](https://agents.md/)
- [GitLab Duo documentation](https://docs.gitlab.com/ee/user/gitlab_duo/)
- [GitLab MCP server documentation](https://docs.gitlab.com/user/gitlab_duo/model_context_protocol/mcp_server/)
The sections below collect practical workflows the DevEx team has found genuinely useful. Add your own when you discover something worth sharing.
## Use Glean to Search Code and Prioritise Your Day
[Glean](https://www.glean.com/) is GitLab's enterprise search tool, connected to our internal systems including GitLab, Slack, Google Drive, and the handbook. Its AI assistant can answer questions across all of these sources in one place.
**Search for code and context:**
Rather than jumping between GitLab, Slack threads, and the handbook, ask Glean directly:
> "Where is the RSpec helper for mocking Gitaly calls?"
> "What was the decision behind how we structure CI job definitions in the main repo?"
> "Find the runbook for broken main branch incidents"
Glean searches across connected sources and surfaces the most relevant results with links.
**Ask for your top priorities:**
At the start of the day, ask Glean's AI assistant to summarise what needs your attention:
> "What are my top priorities today based on my open issues and recent Slack mentions?"
> "Summarise any unread mentions or threads from yesterday that I should follow up on"
> "What issues am I assigned to that have had recent activity?"
**Tips:**
- Glean works best for finding things you know exist but can't locate quickly — use it instead of searching GitLab and Slack separately
- Ask follow-up questions in the same conversation to refine results
- For handbook content, Glean is often faster than the built-in handbook search
## Create GitLab Issues from a Conversation
With the [GitLab MCP server](/handbook/engineering/infrastructure-platforms/developer-experience/ai/#gitlab-mcp-server) connected, you can ask Claude to create issues directly in GitLab without leaving your editor or browser.
**Example prompt:**
> "Create a GitLab issue in `gitlab-org/gitlab` titled 'Investigate flaky spec in pipeline_spec.rb'. Label it `type::bug` and `team::devex`. Set the description to include steps to reproduce and link to this pipeline run: [URL]."
Claude will call the MCP tool, create the issue, and return a link. Useful for capturing bugs, tasks, or follow-ups mid-flow without context-switching to the GitLab UI.
**Tips:**
- Specify the project path (`namespace/project`) explicitly — Claude won't guess it
- You can ask Claude to include structured content in the description (steps to reproduce, acceptance criteria, links) by describing it in your prompt
- Works well at the end of a debugging session: "Summarise what we found and create an issue to track the fix"
---
## Fetch MR Comments and Address Them in VS Code
When you have review feedback on a merge request, you can use Claude Code (with the GitLab MCP server) to pull the comments and work through them without leaving VS Code.
**Workflow:**
1. Open Claude Code in VS Code (`Ctrl+Shift+P` → `Claude Code`)
2. Ask Claude to fetch the MR comments:
> "Fetch the unresolved review comments on MR !12345 in `gitlab-org/gitlab`"
3. Claude retrieves the comments and lists them in context
4. Work through them together:
> "Address the comment on `app/models/pipeline.rb` line 42 — the reviewer is asking us to extract this into a separate method"
5. Claude makes the edit, you review the diff, and repeat for the next comment
**Tips:**
- Ask Claude to summarise the comments first before diving in: "Give me a grouped summary of the feedback by theme"
- For straightforward nits (style, naming, typos), you can batch them: "Address all the nit comments in one pass"
- For more complex feedback, work one comment at a time and verify each change before moving on
- Once done, you can ask Claude to post a reply on the MR confirming the comments have been addressed
---
## Enforce Team Standards Automatically with MR Review Instructions
Instead of leaving recurring review comments by hand, codify your standards in `.gitlab/duo/mr-review-instructions.yaml` and let GitLab Duo enforce them on every MR automatically.
**Quick setup:**
Create the file in your repository:
```yaml
instructions:
  - name: My team standards
    fileFilters:
      - "**/*.rb"
    instructions: |
      1. Public methods must have Sorbet type signatures
      2. Avoid `rescue Exception` — rescue specific error classes
      3. Do not leave debugging output (puts, pp, binding.pry) in production code
```
Duo will include these checks whenever it reviews an MR touching matching files, and will comment with a reference to the rule name when it finds a violation.
**Good rules to start with:**
- Patterns your team raises in almost every review ("don't forget the type sig")
- Things linters don't catch (semantic or architectural conventions)
- Common mistakes specific to your codebase
See the MR review instructions section of the [main AI page](../) for a fuller example and the complete YAML reference.
## Use AI for an Initial MR Review Before Human Review
Before assigning reviewers, run your MR through an AI review pass to catch issues early — logic gaps, missing tests, unclear naming, style inconsistencies. This makes the human review faster and more focused on higher-level concerns.
**Workflow in Claude Code:**
1. With your branch checked out, ask Claude to review the diff:
> "Review the changes in this branch against main. Focus on correctness, test coverage, and anything a reviewer is likely to flag."
2. Claude reads the diff and provides structured feedback
3. Address any issues it raises before assigning human reviewers
Additionally, use [GitLab Duo Code Review](https://docs.gitlab.com/ee/user/project/merge_requests/duo_code_review.html) directly on the MR for an automated first-pass review comment.
**Tips:**
- Give Claude a focus area if relevant: "Pay particular attention to the database queries — I want to avoid N+1s"
- Ask it to check for missing edge cases: "Are there scenarios this change doesn't handle that the tests don't cover?"
- Treat the AI review as a checklist, not a gate — use your judgement on what to act on
- If the MR is large, ask Claude to prioritise: "List the three most important things a reviewer would raise"
---
## Fix Failing Pipelines with AI Assistance
When a pipeline fails, AI can help you move from "something is broken" to "here's the fix" faster by analysing logs, identifying root causes, and suggesting or applying fixes directly.
**Workflow in Claude Code:**
1. Grab the failing job log — either paste it directly or, with the GitLab MCP server connected, ask Claude to fetch it:
> "Fetch the log for the failing job in pipeline #12345 in `gitlab-org/gitlab`"
2. Ask Claude to diagnose the failure:
> "Analyse this CI log and identify the root cause of the failure"
3. If it's a code issue, ask Claude to fix it:
> "Apply the fix to the relevant file"
4. If it's a flaky test or environment issue, ask Claude to explain and suggest next steps
**GitLab Duo Root Cause Analysis:**
For pipeline failures on GitLab.com, use [GitLab Duo Root Cause Analysis](https://docs.gitlab.com/ee/user/gitlab_duo/use_gitlab_duo_chat_in_cicd.html) directly in the pipeline UI. Click the failed job, then **"Root cause analysis"** to get an AI-generated explanation without leaving the browser.
**Tips:**
- For long logs, ask Claude to focus: "Look for the first error in this log and ignore subsequent failures that are likely downstream"
- If the fix isn't obvious, ask for hypotheses: "What are the three most likely causes of this failure?"
- For recurring failures, ask Claude to check git history for context: "Has this test file changed recently in a way that could explain this failure?"
- After fixing, ask Claude to check for similar patterns elsewhere: "Are there other tests in this file that could fail for the same reason?"
## Generate Well-Formed Commit Messages
Poorly written commit messages make `git log` and `git blame` much less useful. Use AI to generate commit messages that are accurate, consistently formatted, and follow [conventional commit](https://www.conventionalcommits.org/) style.
**With Claude Code**, you can ask directly:
> "Write a commit message for the staged changes. Use conventional commit format with a short subject line and a brief body explaining the why."
Claude reads your staged diff and produces a message grounded in what actually changed — no more "fix stuff" commits.
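The result typically looks something like this (the change and issue number are invented for illustration):

```text
fix(ci): retry artifact upload on transient 5xx responses

Uploads occasionally failed when the object store returned a
transient error mid-pipeline. Retry up to three times with
backoff before failing the job.

Related issue: #123
```

The subject line states what changed; the body explains why, which is the part a diff cannot tell future readers.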
**Bake your commit format into AGENTS.md:**
Rather than specifying your format on every prompt, add it to your repository's `AGENTS.md` or `CLAUDE.md` so all agents pick it up automatically:
```markdown
# Commit Messages
- Use conventional commit format: `type(scope): short description`
- Keep the subject line under 72 characters
- Use imperative mood ("add feature", not "added feature")
- Include a body when the change needs context beyond the subject line
- Valid types: feat, fix, chore, docs, refactor, test, ci
- Always link to the relevant issue number in the body if one exists
```
Once this is in your repo, Claude Code will follow these rules without being asked — including when generating commit messages mid-conversation.
**Tips:**
- If the diff touches multiple concerns, ask Claude to flag it: "Does this diff represent a single logical change, or should it be split into separate commits?"
- For squash commits before merging, ask for a summary commit message: "Summarise all the commits on this branch into a single conventional commit message"
- Review the generated message — Claude bases it on the diff, but you know the intent better than it does