- decisions.go: check error from memory INSERT in RecordDecision
- store.go: fix GetRecentExchanges ordering (subquery for DESC+ASC)
- store.go: sanitizeFTS now keeps single-char words instead of dropping them
- store.go: GetByCategory includes _global project (matches SearchFTS)
📝 Walkthrough

The memory system is enhanced to support global-scope visibility across all projects. GetByCategory and GetTopMemories now include '_global' project memories in their results. GetRecentExchanges is refactored with subqueries to ensure chronological ordering. sanitizeFTS expands token validation to accept single-character words. Error handling is added to decision memory recording with prefixed error messages.
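The sanitizeFTS change described in the walkthrough can be sketched as follows. This is an illustrative stand-in, not the store's actual implementation: the function name `sanitizeQuery` and the exact tokenization rules are assumptions; the real `sanitizeFTS` lives in store.go. The point it demonstrates is keeping single-character tokens and quoting each token so SQLite FTS5 MATCH treats it as plain text rather than query syntax.

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// sanitizeQuery is an illustrative sketch of the behavior described in the
// walkthrough: keep alphanumeric tokens, including single-character words
// like "a" or "i", and double-quote each token so SQLite FTS5 MATCH treats
// it as a plain string rather than operator syntax.
func sanitizeQuery(q string) string {
	words := strings.FieldsFunc(q, func(r rune) bool {
		return !unicode.IsLetter(r) && !unicode.IsDigit(r)
	})
	quoted := make([]string, 0, len(words))
	for _, w := range words {
		// A len(w) > 1 guard here is what used to drop "a"/"i" entirely.
		quoted = append(quoted, `"`+w+`"`)
	}
	return strings.Join(quoted, " ")
}

func main() {
	fmt.Println(sanitizeQuery(`what is a "mutex" i can use?`))
}
```

A user query containing only short words previously sanitized down to an empty MATCH string; with single-character tokens kept, it still produces a usable query.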
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 warning
Caution: Some comments are outside the diff and can't be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
internal/memory/decisions.go (1)
33-49: ⚠️ Potential issue | 🟠 Major

Make the decision insert and the memory insert atomic.
The new error check is better, but these writes still happen outside a transaction. If the memory INSERT fails, this returns an error after the decision row has already been committed, so retries can create duplicate decisions while the memory table stays out of sync.
🧩 Wrap both writes in one transaction
```diff
 func (s *Store) RecordDecision(ctx context.Context, projectID, title, decision, rationale string, alternatives, tags []string) (string, error) {
 	s.mu.Lock()
 	defer s.mu.Unlock()

 	altJSON, _ := json.Marshal(alternatives)
 	tagJSON, _ := json.Marshal(tags)

+	tx, err := s.db.BeginTx(ctx, nil)
+	if err != nil {
+		return "", fmt.Errorf("begin decision tx: %w", err)
+	}
+	defer tx.Rollback()
+
 	var id string
-	err := s.db.QueryRowContext(ctx, `
+	err = tx.QueryRowContext(ctx, `
 		INSERT INTO decisions (project_id, title, decision, alternatives, rationale, tags)
 		VALUES (?, ?, ?, ?, ?, ?)
 		RETURNING id
 	`, projectID, title, decision, string(altJSON), rationale, string(tagJSON)).Scan(&id)
 	if err != nil {
 		return "", fmt.Errorf("record decision: %w", err)
 	}

 	// Also save as a memory so it shows up in search and prompt context.
 	content := fmt.Sprintf("%s: %s. Rationale: %s", title, decision, rationale)
-	if _, err := s.db.ExecContext(ctx, `
+	if _, err := tx.ExecContext(ctx, `
 		INSERT INTO memories (project_id, category, content, source, importance, tags)
 		VALUES (?, 'decision', ?, 'decision_log', 0.9, ?)
 	`, projectID, content, string(tagJSON)); err != nil {
-		return id, fmt.Errorf("record decision memory: %w", err)
+		return "", fmt.Errorf("record decision memory: %w", err)
+	}
+
+	if err := tx.Commit(); err != nil {
+		return "", fmt.Errorf("commit decision tx: %w", err)
 	}

 	if s.onSave != nil {
 		s.onSave(projectID)
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/memory/decisions.go` around lines 33-49: wrap the decision INSERT and the subsequent memory INSERT in a single DB transaction so both succeed or both roll back. Start a transaction with s.db.BeginTx(ctx, nil), perform the decisions INSERT via tx.QueryRowContext(...).Scan(&id) (using projectID, title, decision, string(altJSON), rationale, string(tagJSON)), then perform the memories INSERT via tx.ExecContext(...) (using projectID, content, string(tagJSON)); on any error call tx.Rollback() and return the wrapped error, and on success call tx.Commit() before returning id. Ensure you replace s.db.QueryRowContext/ExecContext with tx.QueryRowContext/tx.ExecContext and still return the same error messages (e.g., "record decision: %w" and "record decision memory: %w").

internal/memory/store.go (1)
505-533: ⚠️ Potential issue | 🟠 Major

Don't rebuild exchanges from a flattened message window.

Because the subquery drops `conversation_id`, the loop can only pair by adjacency. That breaks if conversations overlap, and it also under-fills `limit` when the newest window starts with an unanswered user turn. Those bad pairs flow straight into internal/reflection/engine.go:74-99 and internal/reflection/prompt.go:35-52. Keep conversation identity through the query and apply the final limit after complete user→assistant pairs are formed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/memory/store.go` around lines 505 - 533, The current SQL drops conversation_id which forces pairing by adjacency; change the query to SELECT conversation_id, role, content, created_at (so you retain conversation identity) and then in the rows loop use a map keyed by conversation_id to build per-conversation [2]string user→assistant pairs (use created_at to order within each conversation), collect all completed pairs across conversations, sort them by their assistant message created_at (or user created_at depending on desired semantics) and only then apply the final limit before returning; update the variables and logic around pairs/current to reference conversation_id and ensure empty user starts are ignored so you don’t under-fill results consumed by internal/reflection/engine.go and internal/reflection/prompt.go.
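The pairing algorithm the prompt above describes can be sketched in pure Go. This is a minimal sketch under stated assumptions: the `msg` and `exchange` types, field names, and `pairExchanges` helper are illustrative stand-ins, not the store's actual types; the real code would feed rows from the SQL query into this shape.

```go
package main

import (
	"fmt"
	"sort"
)

// msg mirrors the columns the suggested query would return
// (conversation_id, role, content, created_at); the struct is illustrative.
type msg struct {
	convID    string
	role      string // "user" or "assistant"
	content   string
	createdAt int64 // stands in for created_at
}

type exchange struct {
	user, assistant string
	at              int64 // assistant timestamp, used for final ordering
}

// pairExchanges groups messages by conversation, pairs each user turn with
// the assistant reply that follows it within that conversation, then sorts
// completed pairs newest-first and applies the limit only at the end.
func pairExchanges(msgs []msg, limit int) []exchange {
	byConv := make(map[string][]msg)
	for _, m := range msgs {
		byConv[m.convID] = append(byConv[m.convID], m)
	}

	var pairs []exchange
	for _, conv := range byConv {
		// Order within the conversation by timestamp.
		sort.Slice(conv, func(i, j int) bool { return conv[i].createdAt < conv[j].createdAt })
		var pendingUser *msg
		for i := range conv {
			switch conv[i].role {
			case "user":
				pendingUser = &conv[i]
			case "assistant":
				if pendingUser != nil {
					pairs = append(pairs, exchange{pendingUser.content, conv[i].content, conv[i].createdAt})
					pendingUser = nil
				}
			}
		}
		// A trailing unanswered user turn is simply dropped, so it can't
		// produce a bad pair or under-fill the result.
	}

	// Newest complete pairs first across all conversations, then trim.
	sort.Slice(pairs, func(i, j int) bool { return pairs[i].at > pairs[j].at })
	if len(pairs) > limit {
		pairs = pairs[:limit]
	}
	return pairs
}

func main() {
	msgs := []msg{
		{"c1", "user", "q1", 1},
		{"c2", "user", "q2", 2}, // overlapping conversation
		{"c1", "assistant", "a1", 3},
		{"c2", "assistant", "a2", 4},
		{"c1", "user", "dangling", 5}, // unanswered, must be ignored
	}
	for _, e := range pairExchanges(msgs, 10) {
		fmt.Printf("%s -> %s\n", e.user, e.assistant)
	}
}
```

With adjacency-only pairing, the interleaved c1/c2 turns above would be mis-paired (q2 with a1); grouping by conversation first keeps each pair inside its own conversation.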
🧹 Nitpick comments (2)
internal/memory/store_test.go (1)
355-357: Consider an end-to-end `SearchFTS` regression test for this case.

This locks down the rewritten MATCH string, but not the user-visible search behavior. A small DB-backed test that stores content with "a" or "i" and queries it through `SearchFTS` would catch tokenizer/query-shape regressions too.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/memory/store_test.go` around lines 355-357: add an end-to-end DB-backed test that verifies SearchFTS returns rows for single-character tokens (e.g., "a" and "i") to catch tokenizer/query-shape regressions: create a new test (e.g., TestSearchFTS_SingleChar) in the same test suite that inserts records with content containing single-char words, calls SearchFTS with queries like "a" and "i", and asserts the expected rows are returned; use the existing test helpers/setup in internal/memory/store_test.go so the test exercises the actual DB MATCH path rather than only asserting the rewritten MATCH string.

internal/memory/store.go (1)
292-299: Prefer project-local memories before `_global` when this list is limited.

Once `_global` is included here, `ORDER BY importance DESC, created_at DESC LIMIT ?` can return only global rows for a dense category and hide the project's own memories. internal/mcpserver/mcpserver.go:266-272 renders this result set as-is, so a small limit will make that pretty visible.

♻️ Possible ordering tweak

```diff
-		ORDER BY importance DESC, created_at DESC
+		ORDER BY CASE WHEN project_id = ? THEN 0 ELSE 1 END,
+			importance DESC, created_at DESC
 		LIMIT ?
-	`, projectID, category, limit)
+	`, projectID, category, projectID, limit)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/memory/store.go` around lines 292 - 299, The current SQL in the s.db.QueryContext call can return only _global rows when limit is small; change the ORDER BY to prefer project-local memories first by adding a deterministic project-local sort key before importance. Update the query used in s.db.QueryContext to ORDER BY (project_id = '_global') ASC, importance DESC, created_at DESC (so local project_id rows sort before '_global'), leaving the parameters (projectID, category, limit) unchanged; this ensures project-local memories are returned before global ones when the result is limited.
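The ordering the prompt suggests — project-local rows before `_global`, then importance, then recency — can be sanity-checked by simulating the sort key in Go. The `memRow` type and `orderForProject` helper below are illustrative assumptions, not the store's actual code; they mirror the semantics of `ORDER BY (project_id = '_global') ASC, importance DESC, created_at DESC`.

```go
package main

import (
	"fmt"
	"sort"
)

// memRow is an illustrative stand-in for a memories row.
type memRow struct {
	projectID  string
	importance float64
	createdAt  int64
}

// orderForProject sorts rows the way the suggested SQL would:
// ORDER BY (project_id = '_global') ASC, importance DESC, created_at DESC.
func orderForProject(rows []memRow) {
	sort.SliceStable(rows, func(i, j int) bool {
		gi, gj := rows[i].projectID == "_global", rows[j].projectID == "_global"
		if gi != gj {
			return !gi // project-local rows sort before _global
		}
		if rows[i].importance != rows[j].importance {
			return rows[i].importance > rows[j].importance
		}
		return rows[i].createdAt > rows[j].createdAt
	})
}

func main() {
	rows := []memRow{
		{"_global", 0.9, 100},
		{"myproj", 0.5, 50},
		{"_global", 0.8, 90},
		{"myproj", 0.7, 60},
	}
	orderForProject(rows)
	for _, r := range rows {
		fmt.Println(r.projectID, r.importance)
	}
}
```

With the plain `importance DESC` ordering, a `LIMIT 2` over these rows would return only the two `_global` entries; with the local-first key, the project's own memories survive the cut.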
ℹ️ Review info

⚙️ Run configuration
- Configuration used: defaults
- Review profile: CHILL
- Plan: Pro
- Run ID: 7a055b7c-47a3-492a-a860-10b7f39632bf
📒 Files selected for processing (3)
- internal/memory/decisions.go
- internal/memory/store.go
- internal/memory/store_test.go
Summary
- `_global` project memories now included (matches `GetTopMemories` and `SearchFTS`)
- Issue 3 (SearchVector loads all embeddings) left as-is — already project-filtered via JOIN, fine at current scale
Test plan
- `go test ./internal/memory/...` — passes (updated test expectation for sanitizeFTS)
- `go vet ./...` — clean
- `go build ./cmd/ghost/` — compiles