# [AI Impact Analytics] Duo Usage Dashboard
## Problem to solve
TL;DR: "Are my developers using Duo? How are they using it?"
Customers have asked for insight into:
- Which Duo features are users using?
- How frequently are they using them?
- Which assigned seats are not being used by the individuals they are assigned to?
## Proposal
### User-level metrics
Within AI Impact Analytics:
- Provide a breakdown of feature consumption and frequency per assigned Duo seat.
- Duo chat: `x` questions asked
- Code suggestions: count of suggestions accepted
- ...repeat for each unit primitive (https://gitlab.com/gitlab-org/gitlab/-/issues/480067+)
- Bonus: A user-definable time range, or preset selections such as `last 7 days`, `last 30 days`, ...
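The time-range presets above would ultimately map to the `startDate`/`endDate` variables proposed for the API below. A minimal sketch of that mapping (the `preset_range` helper is illustrative, not part of any existing GitLab API):

```python
from datetime import date, timedelta

def preset_range(days, today=None):
    """Return (startDate, endDate) ISO strings for a 'last N days' preset.

    `today` is injectable for testing; defaults to the current date.
    """
    end = today or date.today()
    start = end - timedelta(days=days)
    return start.isoformat(), end.isoformat()

start, end = preset_range(7, today=date(2024, 1, 31))
# start == "2024-01-24", end == "2024-01-31"
```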
### Iteration path
#### Step 1 - GraphQL API
Something like the following:
```graphql
query($fullPath: ID!, $startDate: Date, $endDate: Date) {
  namespace(fullPath: $fullPath) {
    aiMetrics(startDate: $startDate, endDate: $endDate) {
      duoUsers {
        count
        edges {
          cursor
          node {
            user: UserCore!
            codeSuggestionsAccepted: Int # Count of code suggestions accepted
            duoChatInteractions: Int # Count of total interactions with Duo Chat
            vulnerabilitiesExplained: Int # Count of vulnerabilities explained with Duo
            vulnerabilitiesResolved: Int # Count of vulnerabilities resolved with Duo
            cliInteractions: Int # Count of interactions with glab_ask_git_command
            discussionsSummarized: Int # Count of work item discussions summarized
            codeExplained: Int # Count of explain_code interactions
            rootCauseAnalysisPerformed: Int # Count of troubleshoot_job interactions
          }
        }
        nodes: [DuoUser]
        pageInfo
      }
    }
  }
}
```
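A client would call this the same way as any other GitLab GraphQL query, by POSTing to `/api/graphql`. A sketch of building that request payload, assuming the proposed `aiMetrics`/`duoUsers` fields exist (they are part of this proposal, not the current schema):

```python
import json

# Trimmed version of the proposed query; the aiMetrics/duoUsers fields
# are hypothetical until this proposal ships.
QUERY = """
query($fullPath: ID!, $startDate: Date, $endDate: Date) {
  namespace(fullPath: $fullPath) {
    aiMetrics(startDate: $startDate, endDate: $endDate) {
      duoUsers {
        count
        nodes {
          codeSuggestionsAccepted
          duoChatInteractions
        }
      }
    }
  }
}
"""

def build_graphql_payload(full_path, start_date, end_date):
    """Build the JSON body for a POST to GitLab's /api/graphql endpoint."""
    return json.dumps({
        "query": QUERY,
        "variables": {
            "fullPath": full_path,
            "startDate": start_date,
            "endDate": end_date,
        },
    })

payload = build_graphql_payload("my-group", "2024-01-01", "2024-01-31")
```

The actual request would additionally carry a `PRIVATE-TOKEN` or `Authorization: Bearer` header, as with existing GitLab GraphQL queries.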
#### Step 2 - Split Duo Usage into its own dashboard
https://gitlab.com/gitlab-org/gitlab/-/issues/455860/designs/AI_adoption_-_iteration_1.png
## Assumptions
- Why not show acceptance rate at the user level? Displaying acceptance rate per user risks it being misinterpreted as an indicator of individual productivity. For example: a user has a low acceptance rate and a lower [insert other productivity metric], so they need to use the tool better...which is the wrong conclusion to draw.