
[testify-expert] Improve Test Quality: pkg/agentdrain/anomaly_test.go #24469

@github-actions

Description


The test file pkg/agentdrain/anomaly_test.go has been selected for quality improvement by the Testify Uber Super Expert. This issue provides specific, actionable recommendations to enhance test quality, coverage, and maintainability using testify best practices.

Current State

  • Test File: pkg/agentdrain/anomaly_test.go
  • Source File: pkg/agentdrain/anomaly.go
  • Test Functions: 5
  • Lines of Code: 123 lines (test), 85 lines (source)
  • Package: agentdrain

Test Quality Analysis

Strengths ✅

  • Good scenario coverage: four distinct anomaly scenarios are tested (new template, low similarity, rare cluster, normal)
  • Integration test TestAnalyzeEvent verifies end-to-end Miner.AnalyzeEvent behavior, including first-occurrence and repeat-occurrence semantics
  • Build tag //go:build !integration is correctly present at the top of the file

🎯 Areas for Improvement

1. Replace Plain t.Error/t.Fatal with Testify Assertions

Current Issues:
The entire test file uses raw Go testing primitives (t.Error, t.Errorf, t.Fatal, t.Fatalf) instead of testify assertions, which is inconsistent with the rest of the codebase.

Examples from the file:

// ❌ CURRENT (anti-pattern)
if !report.IsNewTemplate {
    t.Error("expected IsNewTemplate=true")
}
if report.AnomalyScore <= 0 {
    t.Errorf("expected positive anomaly score for new template, got %v", report.AnomalyScore)
}
if err != nil {
    t.Fatalf("NewMiner: %v", err)
}
if result == nil {
    t.Fatal("AnalyzeEvent: expected non-nil result")
}

Recommended Changes:

// ✅ IMPROVED (testify)
assert.True(t, report.IsNewTemplate, "Analyze with isNew=true should set IsNewTemplate")
assert.Positive(t, report.AnomalyScore, "new template should produce a positive anomaly score")

require.NoError(t, err, "NewMiner should succeed with default config")
require.NotNil(t, result, "AnalyzeEvent should return non-nil match result")

Why this matters: Testify assertions produce richer failure output, are the standard used throughout this codebase (see scratchpad/testing.md), and require.* variants stop the test immediately on critical setup failures without extra if boilerplate.


2. Consolidate Separate TestAnomalyDetection_* Functions into a Table-Driven Test

Current Issues:
The four functions TestAnomalyDetection_NewTemplate, TestAnomalyDetection_LowSimilarity, TestAnomalyDetection_RareCluster, and TestAnomalyDetection_Normal each construct a nearly identical AnomalyDetector and Cluster, differing only in their inputs and expected flags.

Recommended Changes:

// ✅ IMPROVED - Table-driven test for AnomalyDetector.Analyze
func TestAnomalyDetector_Analyze(t *testing.T) {
    tests := []struct {
        name              string
        isNew             bool
        clusterSize       int
        similarity        float64
        wantIsNew         bool
        wantNewCluster    bool
        wantLowSimilarity bool
        wantRareCluster   bool
        wantPositiveScore bool
        wantReason        string
    }{
        {
            name:              "new template creates cluster",
            isNew:             true,
            clusterSize:       1,
            similarity:        1.0,
            wantIsNew:         true,
            wantNewCluster:    true,
            wantPositiveScore: true,
        },
        {
            name:              "low similarity below threshold",
            isNew:             false,
            clusterSize:       5,
            similarity:        0.2,
            wantLowSimilarity: true,
            wantPositiveScore: true,
        },
        {
            name:              "rare cluster (size <= rareThreshold)",
            isNew:             false,
            clusterSize:       1,
            similarity:        0.9,
            wantRareCluster:   true,
            wantPositiveScore: true,
        },
        {
            name:        "normal: high similarity, common cluster",
            isNew:       false,
            clusterSize: 100,
            similarity:  0.9,
            wantReason:  "no anomaly detected",
        },
        // New boundary case: exactly at threshold
        {
            name:        "boundary: similarity exactly at threshold is not low",
            isNew:       false,
            clusterSize: 5,
            similarity:  0.4, // equal to threshold, not below
        },
        // New combined case: low similarity + rare cluster
        {
            name:              "combined: low similarity and rare cluster",
            isNew:             false,
            clusterSize:       1,
            similarity:        0.1,
            wantLowSimilarity: true,
            wantRareCluster:   true,
            wantPositiveScore: true,
        },
    }

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            d := NewAnomalyDetector(0.4, 2)
            c := &Cluster{ID: 1, Template: []string{"a"}, Size: tt.clusterSize}
            result := &MatchResult{ClusterID: 1, Similarity: tt.similarity}

            report := d.Analyze(result, tt.isNew, c)
            require.NotNil(t, report, "Analyze should return a non-nil report")

            assert.Equal(t, tt.wantIsNew, report.IsNewTemplate, "IsNewTemplate mismatch")
            assert.Equal(t, tt.wantNewCluster, report.NewClusterCreated, "NewClusterCreated mismatch")
            assert.Equal(t, tt.wantLowSimilarity, report.LowSimilarity, "LowSimilarity mismatch")
            assert.Equal(t, tt.wantRareCluster, report.RareCluster, "RareCluster mismatch")
            if tt.wantPositiveScore {
                assert.Positive(t, report.AnomalyScore, "expected positive anomaly score")
            } else {
                assert.Zero(t, report.AnomalyScore, "expected zero anomaly score for normal event")
            }
            if tt.wantReason != "" {
                assert.Equal(t, tt.wantReason, report.Reason, "Reason mismatch")
            }
        })
    }
}

Why this matters: Table-driven tests are easier to extend with new scenarios, reduce code duplication, and follow the pattern recommended in scratchpad/testing.md.


3. Missing Test Coverage: Boundary Conditions and Combined Anomalies

Currently not tested:

  1. Boundary: similarity == threshold — The < check in anomaly.go means similarity == 0.4 should NOT trigger LowSimilarity. This boundary isn't tested.
  2. Boundary: clusterSize == rareThreshold — The <= check means size == 2 (the rareThreshold) SHOULD trigger RareCluster. Not tested.
  3. Combined anomalies — A case where both LowSimilarity and RareCluster are true simultaneously is not tested. The resulting score, (0.7 + 0.3) / 2.0 = 0.5 under the normalization described in item 5, should be verified.
  4. Nil cluster — anomaly.go guards against cluster == nil with an if cluster != nil check. This defensive branch is untested.
  5. Anomaly score formula — The maximum score (1.0 + 0.7 + 0.3) / 2.0 = 1.0 for all three flags set simultaneously is not tested.

Priority test additions:

// Note: these cases assume the table struct gains `cluster *Cluster` and
// `wantScore float64` fields (with the loop passing tt.cluster when set).
{
    name:       "nil cluster does not panic",
    isNew:      false,
    cluster:    nil,
    similarity: 0.5,
},
{
    name:              "all three flags: max score = 1.0",
    isNew:             true,
    clusterSize:       1,
    similarity:        0.1,
    wantIsNew:         true,
    wantLowSimilarity: true,
    wantRareCluster:   true,
    wantScore:         1.0,
},

4. TestAnalyzeEvent: Replace Manual Checks with Testify

Current code:

m, err := NewMiner(cfg)
if err != nil {
    t.Fatalf("NewMiner: %v", err)
}
if result == nil {
    t.Fatal("AnalyzeEvent: expected non-nil result")
}
if !report.IsNewTemplate {
    t.Error("first event should be a new template")
}

Recommended:

m, err := NewMiner(cfg)
require.NoError(t, err, "NewMiner should succeed with default config")

result, report, err := m.AnalyzeEvent(evt)
require.NoError(t, err, "AnalyzeEvent should not return an error")
require.NotNil(t, result, "AnalyzeEvent should return a non-nil match result")
require.NotNil(t, report, "AnalyzeEvent should return a non-nil anomaly report")
assert.True(t, report.IsNewTemplate, "first occurrence should be flagged as a new template")

5. Missing Assertion Messages

All assertions should include a context message explaining what is being verified. This makes CI output actionable without having to read the test source.

// ❌ CURRENT - no context
assert.True(t, report.RareCluster)

// ✅ IMPROVED - context on failure
assert.True(t, report.RareCluster, "cluster with size=1 should be rare when rareThreshold=2")

📋 Implementation Guidelines

Required Imports

import (
    "testing"

    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/require"
)

Priority Order

  1. High: Replace all t.Error/t.Fatal calls with testify (assert.* / require.*)
  2. High: Merge the four separate TestAnomalyDetection_* functions into one table-driven test
  3. Medium: Add missing boundary and combined-anomaly test cases
  4. Medium: Add nil-cluster defensive check test
  5. Low: Add assertion messages to every assert.* call

Best Practices from scratchpad/testing.md

  • ✅ Use require.* for critical setup (stops test on failure)
  • ✅ Use assert.* for test validations (continues checking)
  • ✅ Write table-driven tests with t.Run() and descriptive names
  • ✅ No mocks or test suites - test real component interactions
  • ✅ Always include helpful assertion messages

Testing Commands

# Run tests for this package
go test -v ./pkg/agentdrain/ -run TestAnomalyDetector

# Run all agentdrain tests
go test -v ./pkg/agentdrain/

# Run all unit tests
make test-unit

Acceptance Criteria

  • All t.Error/t.Errorf/t.Fatal/t.Fatalf calls replaced with assert.* / require.*
  • TestAnomalyDetection_NewTemplate, _LowSimilarity, _RareCluster, _Normal merged into one table-driven TestAnomalyDetector_Analyze
  • New test cases added: boundary conditions, combined anomalies, nil cluster, max score
  • TestAnalyzeEvent updated to use testify
  • All assertions include descriptive messages
  • Tests pass: go test -v ./pkg/agentdrain/
  • Code follows patterns in scratchpad/testing.md

Additional Context

  • Repository Testing Guidelines: See scratchpad/testing.md for comprehensive testing patterns
  • Example Tests: Look at pkg/workflow/*_test.go for examples of good testify usage
  • Testify Documentation: https://github.com/stretchr/testify

Priority: Medium
Effort: Small (mostly mechanical conversions with a few new test cases)
Expected Impact: Improved test quality, better CI error messages, easier maintenance, and new coverage of boundary/combined-anomaly scenarios

Files Involved:

  • Test file: pkg/agentdrain/anomaly_test.go
  • Source file: pkg/agentdrain/anomaly.go

Generated by Daily Testify Uber Super Expert
