feat: implement AuditRepository for security audit log persistence#279
Conversation
Append-only `AuditRepository` protocol and `SQLiteAuditRepository` implementation for persisting security audit entries (#260).

- Add `AuditRepository` protocol with `save()` and `query()` methods
- Implement `SQLiteAuditRepository` with parameterized queries
- Add V4 schema migration creating the `audit_entries` table + 5 indexes
- Wire `audit_entries` into `PersistenceBackend` and `SQLitePersistenceBackend`
- Add structured logging events for audit persistence operations
- Add `DuplicateRecordError` for append-only constraint violations
- Update DESIGN_SPEC.md to reflect implemented status
- Full test coverage: 17 tests for repository, migration, backend, protocol

Pre-reviewed by 9 agents; 17 findings addressed.
Dependency Review: ✅ No vulnerabilities, license issues, or OpenSSF Scorecard issues found. Scanned files: none.
⚠️ Review failed: the pull request is closed.
📝 Walkthrough (Summary by CodeRabbit)
Adds append-only audit entry persistence: a new `AuditRepository` protocol, a `SQLiteAuditRepository` implementation, integration into `PersistenceBackend`, a DB migration (v4) for `audit_entries`, new observability events, and comprehensive unit tests and test fixtures.
Sequence Diagram(s)

sequenceDiagram
participant Client
participant PersistenceBackend as Backend
participant SQLiteRepo as SQLiteAuditRepository
participant DB as SQLiteDB
rect rgba(100,149,237,0.5)
Client->>Backend: request to save AuditEntry
Backend->>SQLiteRepo: save(entry)
SQLiteRepo->>DB: INSERT audit_entries (serialize matched_rules, UTC timestamp)
DB-->>SQLiteRepo: OK / IntegrityError / Error
SQLiteRepo-->>Backend: success / DuplicateRecordError / QueryError (emit SAVED or SAVE_FAILED)
Backend-->>Client: ack / error
end
rect rgba(60,179,113,0.5)
Client->>Backend: request to query AuditEntries(filters)
Backend->>SQLiteRepo: query(filters)
SQLiteRepo->>DB: SELECT ... WHERE ... ORDER BY ts DESC LIMIT N
DB-->>SQLiteRepo: rows
SQLiteRepo->>SQLiteRepo: _row_to_entry (deserialize matched_rules → AuditEntry)
SQLiteRepo-->>Backend: tuple[AuditEntry,...] (emit QUERIED or DESERIALIZE_FAILED/QUERY_FAILED)
Backend-->>Client: results / error
end
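The save path in the diagram above can be sketched with the stdlib `sqlite3` module. The real implementation uses `aiosqlite` and the project's Pydantic models, so treat every name and the table schema here as illustrative:

```python
import json
import sqlite3
from datetime import datetime, timezone


class DuplicateRecordError(Exception):
    """Raised when an append-only insert hits an existing id."""


def save_entry(conn: sqlite3.Connection, entry: dict) -> None:
    """Serialize matched_rules, normalize the timestamp to UTC, then INSERT."""
    ts = entry["timestamp"].astimezone(timezone.utc).isoformat()
    rules = json.dumps(entry["matched_rules"])
    try:
        conn.execute(
            "INSERT INTO audit_entries (id, timestamp, matched_rules) "
            "VALUES (?, ?, ?)",
            (entry["id"], ts, rules),
        )
        conn.commit()
    except sqlite3.IntegrityError as exc:
        # A UNIQUE/PRIMARY KEY violation on id means the append-only
        # constraint was breached, as the diagram's error branch shows.
        raise DuplicateRecordError(entry["id"]) from exc
```

The IntegrityError-to-DuplicateRecordError translation is the key move: callers see a domain error, not a driver error.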
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly enhances the system's security posture by introducing a robust mechanism for persisting and querying security audit logs. It establishes a dedicated, append-only repository for audit entries, ensuring the integrity and immutability of these critical records. The changes include a concrete SQLite implementation, schema migrations, and seamless integration into the existing persistence backend, providing a comprehensive solution for tracking security-relevant events.

Highlights
Greptile Summary

This PR implements the `AuditRepository` for security audit log persistence. Key additions and findings:
Confidence Score: 4/5
Important Files Changed
Sequence Diagram

sequenceDiagram
participant Caller
participant SQLiteAuditRepository
participant aiosqlite
participant SQLite
Note over Caller,SQLite: save() — append-only write
Caller->>SQLiteAuditRepository: save(entry: AuditEntry)
SQLiteAuditRepository->>SQLiteAuditRepository: model_dump(mode="json")
SQLiteAuditRepository->>SQLiteAuditRepository: normalize timestamp → UTC ISO
SQLiteAuditRepository->>SQLiteAuditRepository: json.dumps(matched_rules)
SQLiteAuditRepository->>aiosqlite: execute(INSERT INTO audit_entries …)
aiosqlite->>SQLite: INSERT
alt IntegrityError (duplicate id)
SQLite-->>aiosqlite: UNIQUE constraint failed
aiosqlite-->>SQLiteAuditRepository: IntegrityError
SQLiteAuditRepository-->>Caller: raise DuplicateRecordError
else sqlite3.Error
SQLite-->>aiosqlite: error
aiosqlite-->>SQLiteAuditRepository: Error
SQLiteAuditRepository-->>Caller: raise QueryError
else success
SQLiteAuditRepository->>aiosqlite: commit()
SQLiteAuditRepository-->>Caller: None
end
Note over Caller,SQLite: query() — filtered read
Caller->>SQLiteAuditRepository: query(agent_id, verdict, since, until, limit …)
SQLiteAuditRepository->>SQLiteAuditRepository: _validate_query_args()
SQLiteAuditRepository->>SQLiteAuditRepository: _build_query_clause() — normalize since/until to UTC
SQLiteAuditRepository->>aiosqlite: execute(SELECT … WHERE … ORDER BY timestamp DESC LIMIT ?)
aiosqlite->>SQLite: SELECT
SQLite-->>aiosqlite: rows
aiosqlite-->>SQLiteAuditRepository: rows
SQLiteAuditRepository->>SQLiteAuditRepository: _row_to_entry() × N (json.loads matched_rules)
SQLiteAuditRepository-->>Caller: tuple[AuditEntry, …]
Last reviewed commit: 3b891ff
Code Review
This pull request introduces a new AuditRepository for persisting security audit logs, with a SQLiteAuditRepository implementation. The changes include a new database migration for the audit_entries table, new observability events, and updates to the persistence backend protocol. The implementation is well-structured and includes comprehensive tests. My feedback includes a couple of suggestions to improve maintainability and efficiency in the SQLiteAuditRepository implementation.
```python
conditions: list[str] = []
params: list[object] = []

if agent_id is not None:
    conditions.append("agent_id = ?")
    params.append(agent_id)
if action_type is not None:
    conditions.append("action_type = ?")
    params.append(action_type)
if verdict is not None:
    conditions.append("verdict = ?")
    params.append(verdict)
if risk_level is not None:
    conditions.append("risk_level = ?")
    params.append(risk_level.value)
if since is not None:
    conditions.append("timestamp >= ?")
    params.append(since.isoformat())
```
The construction of the WHERE clause is a bit verbose and repetitive. You can make this more concise and maintainable by using a data-driven approach. This would make it easier to add or remove filters in the future.
Suggested change:

```python
filters = {
    "agent_id": agent_id,
    "action_type": action_type,
    "verdict": verdict,
    "risk_level": risk_level.value if risk_level else None,
}
conditions: list[str] = []
params: list[object] = []
for key, value in filters.items():
    if value is not None:
        conditions.append(f"{key} = ?")
        params.append(value)
if since is not None:
    conditions.append("timestamp >= ?")
    params.append(since.isoformat())
```
```python
if isinstance(raw_rules, str):
    row = {**row, "matched_rules": json.loads(raw_rules)}
```
In _row_to_entry, you are creating a new dictionary to update matched_rules. Since row is already a mutable dictionary, you can update it in-place to avoid the overhead of dictionary creation and unpacking, which can be significant when processing many rows.
Suggested change:

```python
if isinstance(raw_rules, str):
    row["matched_rules"] = json.loads(raw_rules)
```
Pull request overview
Implements persistent storage for the security audit log by introducing an AuditRepository abstraction, providing a SQLite-backed implementation, and wiring it into the persistence backend/protocol and schema migrations.
Changes:

- Added `AuditRepository` protocol and `SQLiteAuditRepository` (append-only `save()`, filtered `query()`).
- Bumped SQLite schema to v4 and added the `audit_entries` table + indexes, with corresponding migration tests.
- Integrated the `audit_entries` repository into `PersistenceBackend`/`SQLitePersistenceBackend`, plus updated exports, events, and design spec.
Reviewed changes
Copilot reviewed 17 out of 17 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| `src/ai_company/persistence/repositories.py` | Adds `AuditRepository` protocol (save/query interface). |
| `src/ai_company/persistence/protocol.py` | Extends `PersistenceBackend` protocol with an `audit_entries` property. |
| `src/ai_company/persistence/sqlite/audit_repository.py` | Implements `SQLiteAuditRepository` with JSON handling and logging/events. |
| `src/ai_company/persistence/sqlite/migrations.py` | Adds v4 migration to create the `audit_entries` table and indexes. |
| `src/ai_company/persistence/sqlite/backend.py` | Wires `SQLiteAuditRepository` into the SQLite backend and exposes `audit_entries`. |
| `src/ai_company/persistence/sqlite/__init__.py` | Re-exports `SQLiteAuditRepository`. |
| `src/ai_company/persistence/__init__.py` | Exports `AuditRepository` and re-exports `ParkedContextRepository`. |
| `src/ai_company/observability/events/persistence.py` | Adds audit-entry persistence event constants. |
| `tests/unit/persistence/sqlite/test_audit_repository.py` | Adds coverage for save/query behavior, filtering, serialization, and duplicate-ID handling. |
| `tests/unit/persistence/sqlite/test_migrations.py` | Adds v4 migration tests for the audit table and indexes. |
| `tests/unit/persistence/sqlite/test_backend.py` | Adds a test that the `audit_entries` property raises before connect. |
| `tests/unit/persistence/test_protocol.py` | Adds a protocol-compliance fake repo and backend property for `audit_entries`. |
| `tests/unit/persistence/test_migrations_v2.py` | Updates schema-version assertions to v4. |
| `tests/unit/persistence/sqlite/conftest.py` | Updates the migrated-DB fixture docstring to reflect all migrations. |
| `tests/unit/api/conftest.py` | Adds `FakeAuditRepository` and wires it into the fake persistence backend. |
| `DESIGN_SPEC.md` | Updates spec status/docs to reflect that audit persistence is now implemented. |
| `CLAUDE.md` | Updates the logging-guidance example list to include the persistence audit-entry event constant. |
```python
msg = f"Duplicate audit entry {entry.id!r}"
logger.warning(
    PERSISTENCE_AUDIT_ENTRY_SAVE_FAILED,
    entry_id=entry.id,
    error=str(exc),
)
raise DuplicateRecordError(msg) from exc
```
save() treats any sqlite3.IntegrityError as a duplicate ID and raises DuplicateRecordError. That can misclassify other integrity failures (e.g., future NOT NULL/CHECK/UNIQUE constraints) as duplicates, making debugging and alerting harder. Consider mirroring SQLiteMessageRepository.save() by inspecting the error text (or sqlite extended error code) to confirm it's the PK/UNIQUE-on-id violation; otherwise log PERSISTENCE_AUDIT_ENTRY_SAVE_FAILED and raise QueryError.
Suggested change:

```python
error_message = str(exc)
is_duplicate_id = (
    "UNIQUE constraint failed: audit_entries.id" in error_message
    or "PRIMARY KEY constraint failed: audit_entries.id" in error_message
)
if is_duplicate_id:
    msg = f"Duplicate audit entry {entry.id!r}"
    logger.warning(
        PERSISTENCE_AUDIT_ENTRY_SAVE_FAILED,
        entry_id=entry.id,
        error=error_message,
    )
    raise DuplicateRecordError(msg) from exc
msg = f"Failed to save audit entry {entry.id!r}"
logger.exception(
    PERSISTENCE_AUDIT_ENTRY_SAVE_FAILED,
    entry_id=entry.id,
    error=error_message,
)
raise QueryError(msg) from exc
```
```diff
 @pytest.mark.unit
 class TestV2Migration:
-    async def test_schema_version_is_three(self) -> None:
-        assert SCHEMA_VERSION == 3
+    async def test_schema_version_is_four(self) -> None:
+        assert SCHEMA_VERSION == 4
```
This file/class is still named for the v2 migration (test_migrations_v2.py / TestV2Migration), but it now asserts SCHEMA_VERSION == 4 and checks user_version == 4. Consider renaming the test module/class (or the schema-version test) so the naming matches what’s being validated and is easier to find/maintain as further schema versions are added.
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/ai_company/persistence/sqlite/backend.py (1)
84-143: 🛠️ Refactor suggestion | 🟠 Major

Split `connect()` before it grows further. This method is already handling connection setup, WAL configuration, repository construction, cleanup, and error translation; the audit repository addition pushes it even further past the project's size cap. Extracting the WAL setup and repository wiring into helpers will keep the lifecycle and failure paths easier to follow.
As per coding guidelines, "Function line length must be less than 50 lines; file line length must be less than 800 lines."
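The suggested split might take roughly this shape; the class, method, and attribute names below are illustrative stand-ins, not the project's actual `SQLitePersistenceBackend` code:

```python
import sqlite3


class Backend:
    """Sketch: connect() keeps only lifecycle, helpers hold the details."""

    def __init__(self, path: str = ":memory:") -> None:
        self._path = path
        self._conn: sqlite3.Connection | None = None
        self.repositories: dict[str, object] = {}

    def connect(self) -> None:
        """Open the connection, then delegate setup to focused helpers."""
        self._conn = sqlite3.connect(self._path)
        self._configure_pragmas(self._conn)
        self._wire_repositories(self._conn)

    def _configure_pragmas(self, conn: sqlite3.Connection) -> None:
        # WAL has no effect on in-memory databases, but the call is harmless.
        conn.execute("PRAGMA journal_mode=WAL")

    def _wire_repositories(self, conn: sqlite3.Connection) -> None:
        # The real code would construct SQLiteAuditRepository and friends here.
        self.repositories["audit_entries"] = object()
```

Each helper stays well under the 50-line cap, and failure paths can be tested per helper instead of through the whole `connect()` flow.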
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/ai_company/persistence/repositories.py`:
- Around line 288-317: Add an upper time bound to the public query API by adding
an optional until/before parameter to the query method signature (e.g., add
parameter until: AwareDatetime | None = None to the async def query(...)
function), update the docstring to describe that results are filtered between
since (inclusive) and until (inclusive/exclusive as you choose) and still
AND-combined, validate that if both since and until are provided then until is
not earlier than since (raise QueryError on invalid range), update any internal
query construction to apply the upper bound filter (use the new until value when
building the DB/query filters), and adjust related tests and implementations to
accept the new parameter.
In `@src/ai_company/persistence/sqlite/audit_repository.py`:
- Around line 198-205: Replace the parenthesized except clause with PEP 758
style by removing the surrounding parentheses in the deserialization error
handler: change the except line that currently lists (ValidationError,
json.JSONDecodeError, KeyError, TypeError) as exc to use comma-separated
exceptions without parentheses, keeping the same exception variable name; leave
the logger.exception call (PERSISTENCE_AUDIT_ENTRY_DESERIALIZE_FAILED,
entry_id=row.get("id"), error=str(exc)) and the subsequent raise QueryError(msg)
from exc unchanged so behavior and context (row.get("id"), QueryError) remain
intact.
- Around line 81-88: Change the exception clause to use PEP 758 "no parentheses"
form: replace "except (sqlite3.Error, aiosqlite.Error) as exc:" with "except
sqlite3.Error, aiosqlite.Error as exc:" in the audit save block (the except that
logs PERSISTENCE_AUDIT_ENTRY_SAVE_FAILED and raises QueryError) so the clause
matches the enforced style while keeping the logger.exception and raise
QueryError(msg) from exc unchanged.
- Around line 166-177: Replace the tuple-form except clause with the PEP 758
style (no parentheses) so the handler catches both sqlite3.Error and
aiosqlite.Error correctly; specifically change the current "except
(sqlite3.Error, aiosqlite.Error) as exc:" form to the PEP‑758 form and keep the
existing logger.exception call (PERSISTENCE_AUDIT_ENTRY_QUERY_FAILED,
logger.exception) and the subsequent raise of QueryError(msg) from exc intact.
- Around line 95-185: The query() method is too long; extract the WHERE-clause
and params construction into a private helper called _build_query_clause(...)
that accepts the same filter args (agent_id, action_type, verdict, risk_level,
since) and returns (where: str, params: list[object]); replace the
condition-building block in query() with a call to _build_query_clause(...),
then append the limit to the returned params, keep the SQL assembly, execution,
error handling, and the call to _row_to_entry unchanged so behavior stays
identical.
In `@src/ai_company/persistence/sqlite/migrations.py`:
- Around line 168-188: The migration currently creates audit_entries with a TEXT
timestamp which can sort inconsistently across different timezone offsets;
update the CREATE TABLE in migrations.py to persist a normalized UTC timestamp
or a numeric epoch column and index it: either change the timestamp column to
store ISO8601 UTC (e.g., timestamp TEXT NOT NULL and ensure values written are
normalized to UTC) or add a new column like timestamp_unix_ms REAL NOT NULL (or
INTEGER epoch_ms) and create an index (e.g., idx_ae_timestamp_unix) so
repository queries use timestamp_unix_ms for WHERE timestamp_unix_ms >= ? and
ORDER BY timestamp_unix_ms DESC; reference the audit_entries table and the
timestamp field in migrations.py and add the new index creation line in the same
migration array.
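The sorting hazard that prompt describes comes from comparing ISO strings with mixed UTC offsets; normalizing to UTC before writing makes lexicographic order match chronological order. A small illustration:

```python
from datetime import datetime, timedelta, timezone

cest = timezone(timedelta(hours=2))
x = datetime(2024, 5, 1, 3, 0, tzinfo=cest)          # 01:00 UTC
y = datetime(2024, 5, 1, 2, 0, tzinfo=timezone.utc)  # 02:00 UTC
assert x < y  # x is chronologically earlier

# But the raw ISO strings sort the wrong way round:
assert x.isoformat() > y.isoformat()

# Normalizing to UTC before persisting restores correct string ordering:
x_utc = x.astimezone(timezone.utc).isoformat()
y_utc = y.astimezone(timezone.utc).isoformat()
assert x_utc < y_utc
```

A numeric epoch column avoids the problem entirely, at the cost of a less readable stored value.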
In `@tests/unit/api/conftest.py`:
- Line 24: The fake persistence backend currently drops writes because
FakePersistenceBackend.audit_entries is not implemented to persist state; update
the FakePersistenceBackend class so audit_entries is an in-memory list/dict that
preserves entries across calls and implement the backend methods (e.g.,
create_audit_entry, get_audit_entry, list_audit_entries, delete_audit_entry or
similarly named methods in the class) to append/store, return, filter, apply
limit/order, and handle duplicate-ID behavior consistently; make the same change
to the duplicate implementation referenced around the other block (lines
218-235) so all tests use the shared in-memory audit_entries store for realistic
persistence semantics.
In `@tests/unit/persistence/sqlite/test_audit_repository.py`:
- Around line 194-201: The test test_query_limit currently only asserts count;
modify it to assert the returned rows are the newest-first subset by checking
returned entry_ids (or timestamps) from SQLiteAuditRepository.query: after
inserting entries with entry_id "ae-0".."ae-4" via _make_entry, assert results
== the two newest entries (e.g., entry_ids "ae-4" then "ae-3") to ensure
ordering+limit behavior is enforced.
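The stricter assertion that prompt asks for can be stated independently of the database. With ids "ae-0" through "ae-4" inserted in timestamp order, a limit of 2 should return exactly the newest-first prefix:

```python
# Pure-Python model of the expected query semantics: newest first, then limit.
inserted = [("ae-0", 0), ("ae-1", 1), ("ae-2", 2), ("ae-3", 3), ("ae-4", 4)]


def query_ids(entries: list[tuple[str, int]], limit: int) -> list[str]:
    newest_first = sorted(entries, key=lambda e: e[1], reverse=True)
    return [entry_id for entry_id, _ in newest_first[:limit]]


# Asserting only len(results) == 2 would pass for any 2-element subset;
# checking the ids pins down both the ordering and the limit.
assert query_ids(inserted, limit=2) == ["ae-4", "ae-3"]
```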
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 8b0767d0-7715-47d9-b2fd-8245ef7d8180
📒 Files selected for processing (17)
- CLAUDE.md
- DESIGN_SPEC.md
- src/ai_company/observability/events/persistence.py
- src/ai_company/persistence/__init__.py
- src/ai_company/persistence/protocol.py
- src/ai_company/persistence/repositories.py
- src/ai_company/persistence/sqlite/__init__.py
- src/ai_company/persistence/sqlite/audit_repository.py
- src/ai_company/persistence/sqlite/backend.py
- src/ai_company/persistence/sqlite/migrations.py
- tests/unit/api/conftest.py
- tests/unit/persistence/sqlite/conftest.py
- tests/unit/persistence/sqlite/test_audit_repository.py
- tests/unit/persistence/sqlite/test_backend.py
- tests/unit/persistence/sqlite/test_migrations.py
- tests/unit/persistence/test_migrations_v2.py
- tests/unit/persistence/test_protocol.py
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Agent
- GitHub Check: Greptile Review
🧰 Additional context used
📓 Path-based instructions (3)
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
`**/*.py`:
- No `from __future__ import annotations` — Python 3.14 has PEP 649
- Use `except A, B:` syntax (no parentheses) for exception handling — PEP 758 style, enforced by ruff on Python 3.14
- Line length: 88 characters (enforced by ruff)
Files:
- src/ai_company/persistence/repositories.py
- src/ai_company/persistence/protocol.py
- tests/unit/api/conftest.py
- src/ai_company/persistence/__init__.py
- tests/unit/persistence/sqlite/test_backend.py
- tests/unit/persistence/test_migrations_v2.py
- src/ai_company/observability/events/persistence.py
- src/ai_company/persistence/sqlite/migrations.py
- tests/unit/persistence/sqlite/conftest.py
- tests/unit/persistence/sqlite/test_migrations.py
- tests/unit/persistence/sqlite/test_audit_repository.py
- src/ai_company/persistence/sqlite/audit_repository.py
- tests/unit/persistence/test_protocol.py
- src/ai_company/persistence/sqlite/__init__.py
- src/ai_company/persistence/sqlite/backend.py
src/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
`src/**/*.py`:
- Type hints required on all public functions; mypy strict mode enforced
- Google style docstrings required on public classes and functions, enforced by ruff D rules
- Create new objects for immutability — never mutate existing objects. Use `copy.deepcopy()` at construction for non-Pydantic internal collections and `MappingProxyType` for read-only enforcement.
- Use `copy.deepcopy()` at system boundaries (tool execution, LLM provider serialization, inter-agent delegation, persistence serialization) for dict/list fields in frozen Pydantic models
- Use frozen Pydantic models for config and identity; use separate mutable-via-copy models (with `model_copy(update=...)`) for runtime state that evolves
- Never mix static config fields with mutable runtime fields in one Pydantic model
- Use Pydantic v2 features: `BaseModel`, `model_validator`, `computed_field`, `ConfigDict`
- Use `@computed_field` for derived values instead of storing and validating redundant fields
- Use `NotBlankStr` from `core.types` for all identifier and name fields — including optional (`NotBlankStr | None`) and tuple variants — instead of manual whitespace validators
- Prefer `asyncio.TaskGroup` for fan-out/fan-in parallel operations in new code — prefer structured concurrency over bare `create_task`
- Function line length must be less than 50 lines; file line length must be less than 800 lines
- Handle errors explicitly; never silently swallow exceptions
- Validate at system boundaries — user input, external APIs, config files
- Every module with business logic MUST import logger from `ai_company.observability` with `logger = get_logger(__name__)`
- Never use `import logging`, `logging.getLogger()`, or `print()` in application code
- Always use `logger` as the variable name for logger instances (not `_logger` or `log`)
- Use domain-specific event constants from `ai_company.observability.events` for all log events (e.g., `PROVIDER_CALL_START` from `events.provider`, `BUDGET_RECORD_ADDED` from `events.budget`, etc.)
- Alw...
Files:
- src/ai_company/persistence/repositories.py
- src/ai_company/persistence/protocol.py
- src/ai_company/persistence/__init__.py
- src/ai_company/observability/events/persistence.py
- src/ai_company/persistence/sqlite/migrations.py
- src/ai_company/persistence/sqlite/audit_repository.py
- src/ai_company/persistence/sqlite/__init__.py
- src/ai_company/persistence/sqlite/backend.py
tests/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
`tests/**/*.py`:
- Tests must use pytest markers: `@pytest.mark.unit`, `@pytest.mark.integration`, `@pytest.mark.e2e`, `@pytest.mark.slow`
- Prefer `@pytest.mark.parametrize` for testing similar cases
Files:
- tests/unit/api/conftest.py
- tests/unit/persistence/sqlite/test_backend.py
- tests/unit/persistence/test_migrations_v2.py
- tests/unit/persistence/sqlite/conftest.py
- tests/unit/persistence/sqlite/test_migrations.py
- tests/unit/persistence/sqlite/test_audit_repository.py
- tests/unit/persistence/test_protocol.py
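The marker-plus-parametrize guidance above combines like this; the test below is a generic illustration, not one of the PR's actual tests:

```python
import pytest


@pytest.mark.unit
@pytest.mark.parametrize(
    ("requested", "stored", "expected"),
    [(2, 5, 2), (10, 5, 5), (1, 0, 0)],
)
def test_effective_limit(requested: int, stored: int, expected: int) -> None:
    # A query can return at most min(limit, rows available).
    assert min(requested, stored) == expected
```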
🧠 Learnings (9)
📚 Learning: 2026-03-10T21:27:30.980Z
Learnt from: CR
Repo: Aureliolo/ai-company PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-03-10T21:27:30.980Z
Learning: Applies to src/**/*.py : Use domain-specific event constants from `ai_company.observability.events` for all log events (e.g., `PROVIDER_CALL_START` from `events.provider`, `BUDGET_RECORD_ADDED` from `events.budget`, etc.)
Applied to files:
src/ai_company/observability/events/persistence.py, CLAUDE.md
📚 Learning: 2026-03-10T21:27:30.980Z
Learnt from: CR
Repo: Aureliolo/ai-company PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-03-10T21:27:30.980Z
Learning: Applies to src/**/*.py : Every module with business logic MUST import logger from `ai_company.observability` with `logger = get_logger(__name__)`
Applied to files:
CLAUDE.md
📚 Learning: 2026-03-10T21:27:30.980Z
Learnt from: CR
Repo: Aureliolo/ai-company PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-03-10T21:27:30.980Z
Learning: Applies to src/**/*.py : Always use structured logging with `logger.info(EVENT, key=value)` — never use string formatting like `logger.info('msg %s', val)`
Applied to files:
CLAUDE.md
📚 Learning: 2026-03-10T21:27:30.980Z
Learnt from: CR
Repo: Aureliolo/ai-company PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-03-10T21:27:30.980Z
Learning: Applies to src/**/*.py : Never use `import logging`, `logging.getLogger()`, or `print()` in application code
Applied to files:
CLAUDE.md
📚 Learning: 2026-03-10T21:27:30.980Z
Learnt from: CR
Repo: Aureliolo/ai-company PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-03-10T21:27:30.980Z
Learning: Applies to src/**/*.py : Always use `logger` as the variable name for logger instances (not `_logger` or `log`)
Applied to files:
CLAUDE.md
📚 Learning: 2026-03-10T21:27:30.980Z
Learnt from: CR
Repo: Aureliolo/ai-company PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-03-10T21:27:30.980Z
Learning: Applies to src/**/*.py : All error paths must log at WARNING or ERROR with context before raising
Applied to files:
CLAUDE.md
📚 Learning: 2026-03-10T21:27:30.980Z
Learnt from: CR
Repo: Aureliolo/ai-company PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-03-10T21:27:30.980Z
Learning: Applies to src/**/*.py : All state transitions must log at INFO level
Applied to files:
CLAUDE.md
📚 Learning: 2026-03-10T21:27:30.980Z
Learnt from: CR
Repo: Aureliolo/ai-company PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-03-10T21:27:30.980Z
Learning: Applies to src/**/*.py : Pure data models, enums, and re-exports do NOT need logging
Applied to files:
CLAUDE.md
📚 Learning: 2026-03-10T21:27:30.980Z
Learnt from: CR
Repo: Aureliolo/ai-company PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-03-10T21:27:30.980Z
Learning: Applies to src/**/*.py : DEBUG logging for object creation, internal flow, entry/exit of key functions
Applied to files:
CLAUDE.md
🧬 Code graph analysis (10)
src/ai_company/persistence/repositories.py (3)
- src/ai_company/security/models.py (1): `AuditEntry` (112-149)
- src/ai_company/security/timeout/parked_context.py (1): `ParkedContext` (19-64)
- src/ai_company/persistence/sqlite/audit_repository.py (1): `query` (95-185)

src/ai_company/persistence/protocol.py (2)
- src/ai_company/persistence/repositories.py (1): `AuditRepository` (268-317)
- src/ai_company/persistence/sqlite/backend.py (1): `audit_entries` (291-297)

tests/unit/api/conftest.py (3)
- src/ai_company/security/models.py (1): `AuditEntry` (112-149)
- src/ai_company/persistence/sqlite/audit_repository.py (2): `save` (44-93), `query` (95-185)
- src/ai_company/persistence/protocol.py (1): `audit_entries` (122-124)

src/ai_company/persistence/__init__.py (1)
- src/ai_company/persistence/repositories.py (2): `AuditRepository` (268-317), `ParkedContextRepository` (196-264)

tests/unit/persistence/test_migrations_v2.py (2)
- src/ai_company/persistence/sqlite/migrations.py (1): `get_user_version` (194-198)
- tests/unit/persistence/sqlite/conftest.py (1): `memory_db` (15-22)

src/ai_company/persistence/sqlite/migrations.py (1)
- tests/unit/hr/test_persistence.py (1): `db` (29-35)

tests/unit/persistence/sqlite/test_audit_repository.py (5)
- src/ai_company/core/enums.py (2): `ApprovalRiskLevel` (443-449), `ToolCategory` (294-308)
- src/ai_company/persistence/errors.py (2): `DuplicateRecordError` (29-30), `QueryError` (33-34)
- src/ai_company/persistence/sqlite/audit_repository.py (3): `SQLiteAuditRepository` (31-205), `save` (44-93), `query` (95-185)
- src/ai_company/security/models.py (1): `AuditEntry` (112-149)
- tests/unit/persistence/sqlite/conftest.py (1): `migrated_db` (26-34)

src/ai_company/persistence/sqlite/audit_repository.py (1)
- src/ai_company/security/models.py (1): `AuditEntry` (112-149)

tests/unit/persistence/test_protocol.py (5)
- src/ai_company/persistence/repositories.py (8): `AuditRepository` (268-317), `save` (40-49, 106-115, 162-172, 199-208, 276-286), `query` (117-135, 288-317)
- src/ai_company/core/enums.py (2): `ApprovalRiskLevel` (443-449), `TaskStatus` (198-224)
- src/ai_company/security/models.py (1): `AuditEntry` (112-149)
- src/ai_company/persistence/sqlite/audit_repository.py (2): `save` (44-93), `query` (95-185)
- src/ai_company/persistence/protocol.py (1): `audit_entries` (122-124)

src/ai_company/persistence/sqlite/__init__.py (1)
- src/ai_company/persistence/sqlite/audit_repository.py (1): `SQLiteAuditRepository` (31-205)
🪛 LanguageTool
CLAUDE.md
[style] ~109-~109: A comma is missing here.
Context: ...nder ai_company.observability.events (e.g. PROVIDER_CALL_START from `events.prov...
(EG_NO_COMMA)
🔇 Additional comments (3)
src/ai_company/persistence/sqlite/audit_repository.py (3)
- 1-28: LGTM! Imports are well-organized and the logger is correctly initialized using `get_logger(__name__)` with domain-specific event constants from `ai_company.observability.events.persistence`.
- 31-43: LGTM! The class definition follows Google-style docstring conventions and the constructor is simple and focused.
- 187-197: LGTM! Good immutability practice creating a new dict at line 196 with `{**row, "matched_rules": ...}` instead of mutating the input.
```python
async def query(  # noqa: PLR0913
    self,
    *,
    agent_id: NotBlankStr | None = None,
    action_type: str | None = None,
    verdict: AuditVerdictStr | None = None,
    risk_level: ApprovalRiskLevel | None = None,
    since: AwareDatetime | None = None,
    limit: int = 100,
) -> tuple[AuditEntry, ...]:
    """Query audit entries with optional filters.

    Filters are AND-combined. Results are ordered by timestamp
    descending (newest first).

    Args:
        agent_id: Filter by agent identifier.
        action_type: Filter by action type string.
        verdict: Filter by verdict string.
        risk_level: Filter by risk level.
        since: Only return entries at or after this timestamp.
        limit: Maximum number of entries to return (must be >= 1).

    Returns:
        Matching audit entries as a tuple.

    Raises:
        QueryError: If the operation fails or *limit* < 1.
    """
    ...
```
Add an upper time bound to the public audit query API.
The linked objective calls for querying by time range, but this protocol only supports an open-ended since filter. That prevents bounded queries like “between T1 and T2”, and because this is the public contract, adding until/before later becomes a breaking change for every implementation and test double.
💡 Minimal protocol change

```diff
 async def query(  # noqa: PLR0913
     self,
     *,
     agent_id: NotBlankStr | None = None,
     action_type: str | None = None,
     verdict: AuditVerdictStr | None = None,
     risk_level: ApprovalRiskLevel | None = None,
     since: AwareDatetime | None = None,
+    until: AwareDatetime | None = None,
     limit: int = 100,
 ) -> tuple[AuditEntry, ...]:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/ai_company/persistence/repositories.py` around lines 288 - 317, Add an
upper time bound to the public query API by adding an optional until/before
parameter to the query method signature (e.g., add parameter until:
AwareDatetime | None = None to the async def query(...) function), update the
docstring to describe that results are filtered between since (inclusive) and
until (inclusive/exclusive as you choose) and still AND-combined, validate that
if both since and until are provided then until is not earlier than since (raise
QueryError on invalid range), update any internal query construction to apply
the upper bound filter (use the new until value when building the DB/query
filters), and adjust related tests and implementations to accept the new
parameter.
```python
except (sqlite3.Error, aiosqlite.Error) as exc:
    msg = f"Failed to save audit entry {entry.id!r}"
    logger.exception(
        PERSISTENCE_AUDIT_ENTRY_SAVE_FAILED,
        entry_id=entry.id,
        error=str(exc),
    )
    raise QueryError(msg) from exc
```
🛠️ Refactor suggestion | 🟠 Major
Use PEP 758 exception syntax (no parentheses).
As per coding guidelines: "Use except A, B: syntax (no parentheses) for exception handling — PEP 758 style, enforced by ruff on Python 3.14".
♻️ Proposed fix

```diff
-except (sqlite3.Error, aiosqlite.Error) as exc:
+except sqlite3.Error, aiosqlite.Error as exc:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/ai_company/persistence/sqlite/audit_repository.py` around lines 81 - 88,
Change the exception clause to use PEP 758 "no parentheses" form: replace
"except (sqlite3.Error, aiosqlite.Error) as exc:" with "except sqlite3.Error,
aiosqlite.Error as exc:" in the audit save block (the except that logs
PERSISTENCE_AUDIT_ENTRY_SAVE_FAILED and raises QueryError) so the clause matches
the enforced style while keeping the logger.exception and raise QueryError(msg)
from exc unchanged.
```python
except (sqlite3.Error, aiosqlite.Error) as exc:
    msg = "Failed to query audit entries"
    logger.exception(
        PERSISTENCE_AUDIT_ENTRY_QUERY_FAILED,
        error=str(exc),
        agent_id=agent_id,
        action_type=action_type,
        verdict=verdict,
        risk_level=risk_level.value if risk_level else None,
        limit=limit,
    )
    raise QueryError(msg) from exc
```
🛠️ Refactor suggestion | 🟠 Major
Use PEP 758 exception syntax (no parentheses).
♻️ Proposed fix

```diff
-except (sqlite3.Error, aiosqlite.Error) as exc:
+except sqlite3.Error, aiosqlite.Error as exc:
```

As per coding guidelines: "Use except A, B: syntax (no parentheses) for exception handling — PEP 758 style, enforced by ruff on Python 3.14".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/ai_company/persistence/sqlite/audit_repository.py` around lines 166 -
177, Replace the tuple-form except clause with the PEP 758 style (no
parentheses) so the handler catches both sqlite3.Error and aiosqlite.Error
correctly; specifically change the current "except (sqlite3.Error,
aiosqlite.Error) as exc:" form to the PEP‑758 form and keep the existing
logger.exception call (PERSISTENCE_AUDIT_ENTRY_QUERY_FAILED, logger.exception)
and the subsequent raise of QueryError(msg) from exc intact.
```python
except (ValidationError, json.JSONDecodeError, KeyError, TypeError) as exc:
    msg = f"Failed to deserialize audit entry {row.get('id')!r}"
    logger.exception(
        PERSISTENCE_AUDIT_ENTRY_DESERIALIZE_FAILED,
        entry_id=row.get("id"),
        error=str(exc),
    )
    raise QueryError(msg) from exc
```
🛠️ Refactor suggestion | 🟠 Major
Use PEP 758 exception syntax (no parentheses).
♻️ Proposed fix

```diff
-except (ValidationError, json.JSONDecodeError, KeyError, TypeError) as exc:
+except ValidationError, json.JSONDecodeError, KeyError, TypeError as exc:
```

As per coding guidelines: "Use except A, B: syntax (no parentheses) for exception handling — PEP 758 style, enforced by ruff on Python 3.14".
📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```suggestion
except ValidationError, json.JSONDecodeError, KeyError, TypeError as exc:
    msg = f"Failed to deserialize audit entry {row.get('id')!r}"
    logger.exception(
        PERSISTENCE_AUDIT_ENTRY_DESERIALIZE_FAILED,
        entry_id=row.get("id"),
        error=str(exc),
    )
    raise QueryError(msg) from exc
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/ai_company/persistence/sqlite/audit_repository.py` around lines 198 -
205, Replace the parenthesized except clause with PEP 758 style by removing the
surrounding parentheses in the deserialization error handler: change the except
line that currently lists (ValidationError, json.JSONDecodeError, KeyError,
TypeError) as exc to use comma-separated exceptions without parentheses, keeping
the same exception variable name; leave the logger.exception call
(PERSISTENCE_AUDIT_ENTRY_DESERIALIZE_FAILED, entry_id=row.get("id"),
error=str(exc)) and the subsequent raise QueryError(msg) from exc unchanged so
behavior and context (row.get("id"), QueryError) remain intact.
```sql
CREATE TABLE IF NOT EXISTS audit_entries (
    id TEXT PRIMARY KEY,
    timestamp TEXT NOT NULL,
    agent_id TEXT,
    task_id TEXT,
    tool_name TEXT NOT NULL,
    tool_category TEXT NOT NULL,
    action_type TEXT NOT NULL,
    arguments_hash TEXT NOT NULL,
    verdict TEXT NOT NULL,
    risk_level TEXT NOT NULL,
    reason TEXT NOT NULL,
    matched_rules TEXT NOT NULL DEFAULT '[]',
    evaluation_duration_ms REAL NOT NULL,
    approval_id TEXT
);
CREATE INDEX IF NOT EXISTS idx_ae_timestamp ON audit_entries(timestamp);
CREATE INDEX IF NOT EXISTS idx_ae_agent_id ON audit_entries(agent_id);
CREATE INDEX IF NOT EXISTS idx_ae_action_type ON audit_entries(action_type);
CREATE INDEX IF NOT EXISTS idx_ae_verdict ON audit_entries(verdict);
CREATE INDEX IF NOT EXISTS idx_ae_risk_level ON audit_entries(risk_level);
```
Normalize audit timestamps before relying on TEXT range/order queries.
This schema stores audit_entries.timestamp as raw text, but the new repository API queries with timestamp >= ? and ORDER BY timestamp DESC against AwareDatetime values. That only stays chronologically correct if every stored value is normalized to the same offset; mixed +00:00 / -05:00 timestamps will sort and filter incorrectly. Persist a normalized UTC value, or add a numeric epoch column and index/query that instead.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/ai_company/persistence/sqlite/migrations.py` around lines 168 - 188, The
migration currently creates audit_entries with a TEXT timestamp which can sort
inconsistently across different timezone offsets; update the CREATE TABLE in
migrations.py to persist a normalized UTC timestamp or a numeric epoch column
and index it: either change the timestamp column to store ISO8601 UTC (e.g.,
timestamp TEXT NOT NULL and ensure values written are normalized to UTC) or add
a new column like timestamp_unix_ms REAL NOT NULL (or INTEGER epoch_ms) and
create an index (e.g., idx_ae_timestamp_unix) so repository queries use
timestamp_unix_ms for WHERE timestamp_unix_ms >= ? and ORDER BY
timestamp_unix_ms DESC; reference the audit_entries table and the timestamp
field in migrations.py and add the new index creation line in the same migration
array.
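The failure mode this finding describes is easy to reproduce with the stdlib: ISO-8601 strings with mixed UTC offsets do not sort chronologically as TEXT, while strings normalized to UTC do. `to_utc_iso` below is a hypothetical helper illustrating the recommended normalization, not the PR's actual code.

```python
from datetime import datetime, timezone

# Two instants: the +00:00 one is EARLIER in real time (10:00 UTC vs 14:00 UTC),
# but lexicographic TEXT comparison puts it AFTER the -05:00 one.
a = "2026-01-01T10:00:00+00:00"   # 10:00 UTC
b = "2026-01-01T09:00:00-05:00"   # 14:00 UTC, yet a smaller string
assert a > b                      # string order contradicts chronology


def to_utc_iso(ts: datetime) -> str:
    """Hypothetical helper: normalize an aware datetime to UTC ISO-8601."""
    return ts.astimezone(timezone.utc).isoformat()


a_utc = to_utc_iso(datetime.fromisoformat(a))
b_utc = to_utc_iso(datetime.fromisoformat(b))
assert a_utc < b_utc              # normalized strings sort chronologically
```

With every stored value pinned to `+00:00`, `timestamp >= ?` and `ORDER BY timestamp DESC` behave correctly on the TEXT column without a separate epoch column.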
… CodeRabbit

- Add `until` parameter to AuditRepository protocol and implementations
- Validate `until < since` in query args (raises QueryError)
- Normalize timestamps to UTC in save() and query() for consistent ordering
- Narrow IntegrityError catch to distinguish duplicate-key from other constraints
- Extract _build_query_clause() and _validate_query_args() helpers
- Refactor SQLitePersistenceBackend.connect() under 50-line limit
- Add protocol compliance test (isinstance check)
- Add tests for UTC normalization, DB error handling, time range queries
- Strengthen FakeAuditRepository in API conftest with full filtering
- Rename TestV2Migration → TestSchemaMigrations
```python
except sqlite3.IntegrityError as exc:
    error_text = str(exc)
    is_duplicate = (
        "UNIQUE constraint failed: audit_entries.id" in error_text
        or "PRIMARY KEY" in error_text
    )
    if is_duplicate:
        msg = f"Duplicate audit entry {entry.id!r}"
        logger.warning(
            PERSISTENCE_AUDIT_ENTRY_SAVE_FAILED,
            entry_id=entry.id,
            error=error_text,
        )
        raise DuplicateRecordError(msg) from exc
```
"PRIMARY KEY" fallback is unreachable dead code
SQLite always reports duplicate PRIMARY KEY insertions as "UNIQUE constraint failed: audit_entries.id" — the string "PRIMARY KEY" never appears in the error message for this table. The implicit UNIQUE index SQLite creates for a TEXT PRIMARY KEY column produces the same UNIQUE constraint failed message as any explicit UNIQUE constraint, so the or "PRIMARY KEY" in error_text branch can never be True independently of the first condition.
This dead branch also makes the catch slightly broader than documented: any future integrity error whose message coincidentally contains "PRIMARY KEY" (e.g. a schema introspection error) would be silently mapped to DuplicateRecordError instead of QueryError.
```suggestion
except sqlite3.IntegrityError as exc:
    error_text = str(exc)
    is_duplicate = "UNIQUE constraint failed: audit_entries.id" in error_text
    if is_duplicate:
```
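The reviewer's claim about SQLite's error text is easy to verify with the stdlib `sqlite3` module. The table name below mirrors the migration; the schema is trimmed to two columns purely for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE audit_entries (id TEXT PRIMARY KEY, verdict TEXT)")
conn.execute("INSERT INTO audit_entries VALUES ('e1', 'allow')")

try:
    # Second insert with the same TEXT primary key violates the implicit
    # unique index SQLite creates for the PRIMARY KEY column.
    conn.execute("INSERT INTO audit_entries VALUES ('e1', 'deny')")
except sqlite3.IntegrityError as exc:
    message = str(exc)

# SQLite names the implicit unique index on the column, not "PRIMARY KEY":
assert "UNIQUE constraint failed: audit_entries.id" in message
assert "PRIMARY KEY" not in message
conn.close()
```

This is why the `or "PRIMARY KEY" in error_text` branch cannot fire for this table, while it could misclassify an unrelated future integrity error.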
Prompt To Fix With AI
Path: src/ai_company/persistence/sqlite/audit_repository.py, Line: 87-100
How can I resolve this? If you propose a fix, please make it concise.

```python
since: datetime | None = None,
until: datetime | None = None,
```
FakeAuditRepository.query() uses datetime instead of AwareDatetime
The AuditRepository protocol declares since and until as AwareDatetime | None, but this fake uses the broader datetime | None. Under mypy strict mode this is a structural subtyping mismatch that can hide type errors — callers that pass a timezone-naive datetime would silently satisfy the fake's signature while being rejected (at type-check time) by the real implementation.
The _FakeAuditRepository in tests/unit/persistence/test_protocol.py (line 158–160) already uses the correct AwareDatetime type, so there's precedent in this codebase. Aligning both fakes avoids a confusing inconsistency.
```suggestion
since: AwareDatetime | None = None,
until: AwareDatetime | None = None,
```
You'll also need from pydantic import AwareDatetime in the import block (it's already imported at line 23 under TYPE_CHECKING; move it to the runtime import or add from __future__ if needed).
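The aware/naive distinction that `AwareDatetime` enforces can be expressed with the stdlib alone. `is_aware` below is an illustrative helper that mirrors (by assumption) pydantic's validation rule; it is not part of either codebase.

```python
from datetime import datetime, timezone


def is_aware(ts: datetime) -> bool:
    """A datetime is aware iff it carries a usable UTC offset."""
    return ts.tzinfo is not None and ts.tzinfo.utcoffset(ts) is not None


aware = datetime(2026, 1, 1, tzinfo=timezone.utc)
naive = datetime(2026, 1, 1)

assert is_aware(aware)
assert not is_aware(naive)
# A parameter typed `datetime | None` accepts both values; `AwareDatetime | None`
# rejects the naive one at type-check / validation time.
```

Annotating the fake with the broader `datetime` is exactly what lets a naive value slip past the test double while the real repository would reject it.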
Prompt To Fix With AI
Path: tests/unit/api/conftest.py, Line: 238-239
How can I resolve this? If you propose a fix, please make it concise.

🤖 I have created a release *beep* *boop*

---

## [0.1.0](v0.0.0...v0.1.0) (2026-03-11)

### Features

* add autonomy levels and approval timeout policies ([#42](#42), [#126](#126)) ([#197](#197)) ([eecc25a](eecc25a))
* add CFO cost optimization service with anomaly detection, reports, and approval decisions ([#186](#186)) ([a7fa00b](a7fa00b))
* add code quality toolchain (ruff, mypy, pre-commit, dependabot) ([#63](#63)) ([36681a8](36681a8))
* add configurable cost tiers and subscription/quota-aware tracking ([#67](#67)) ([#185](#185)) ([9baedfa](9baedfa))
* add container packaging, Docker Compose, and CI pipeline ([#269](#269)) ([435bdfe](435bdfe)), closes [#267](#267)
* add coordination error taxonomy classification pipeline ([#146](#146)) ([#181](#181)) ([70c7480](70c7480))
* add cost-optimized, hierarchical, and auction assignment strategies ([#175](#175)) ([ce924fa](ce924fa)), closes [#173](#173)
* add design specification, license, and project setup ([8669a09](8669a09))
* add env var substitution and config file auto-discovery ([#77](#77)) ([7f53832](7f53832))
* add FastestStrategy routing + vendor-agnostic cleanup ([#140](#140)) ([09619cb](09619cb)), closes [#139](#139)
* add HR engine and performance tracking ([#45](#45), [#47](#47)) ([#193](#193)) ([2d091ea](2d091ea))
* add issue auto-search and resolution verification to PR review skill ([#119](#119)) ([deecc39](deecc39))
* add mandatory JWT + API key authentication ([#256](#256)) ([c279cfe](c279cfe))
* add memory retrieval, ranking, and context injection pipeline ([#41](#41)) ([873b0aa](873b0aa))
* add pluggable MemoryBackend protocol with models, config, and events ([#180](#180)) ([46cfdd4](46cfdd4))
* add pluggable MemoryBackend protocol with models, config, and events ([#32](#32)) ([46cfdd4](46cfdd4))
* add pluggable output scan response policies ([#263](#263)) ([b9907e8](b9907e8))
* add pluggable PersistenceBackend protocol with SQLite implementation ([#36](#36)) ([f753779](f753779))
* add progressive trust and promotion/demotion subsystems ([#43](#43), [#49](#49)) ([3a87c08](3a87c08))
* add retry handler, rate limiter, and provider resilience ([#100](#100)) ([b890545](b890545))
* add SecOps security agent with rule engine, audit log, and ToolInvoker integration ([#40](#40)) ([83b7b6c](83b7b6c))
* add shared org memory and memory consolidation/archival ([#125](#125), [#48](#48)) ([4a0832b](4a0832b))
* design unified provider interface ([#86](#86)) ([3e23d64](3e23d64))
* expand template presets, rosters, and add inheritance ([#80](#80), [#81](#81), [#84](#84)) ([15a9134](15a9134))
* implement agent runtime state vs immutable config split ([#115](#115)) ([4cb1ca5](4cb1ca5))
* implement AgentEngine core orchestrator ([#11](#11)) ([#143](#143)) ([f2eb73a](f2eb73a))
* implement AuditRepository for security audit log persistence ([#279](#279)) ([94bc29f](94bc29f))
* implement basic tool system (registry, invocation, results) ([#15](#15)) ([c51068b](c51068b))
* implement built-in file system tools ([#18](#18)) ([325ef98](325ef98))
* implement communication foundation — message bus, dispatcher, and messenger ([#157](#157)) ([8e71bfd](8e71bfd))
* implement company template system with 7 built-in presets ([#85](#85)) ([cbf1496](cbf1496))
* implement conflict resolution protocol ([#122](#122)) ([#166](#166)) ([e03f9f2](e03f9f2))
* implement core entity and role system models ([#69](#69)) ([acf9801](acf9801))
* implement crash recovery with fail-and-reassign strategy ([#149](#149)) ([e6e91ed](e6e91ed))
* implement engine extensions — Plan-and-Execute loop and call categorization ([#134](#134), [#135](#135)) ([#159](#159)) ([9b2699f](9b2699f))
* implement enterprise logging system with structlog ([#73](#73)) ([2f787e5](2f787e5))
* implement graceful shutdown with cooperative timeout strategy ([#130](#130)) ([6592515](6592515))
* implement hierarchical delegation and loop prevention ([#12](#12), [#17](#17)) ([6be60b6](6be60b6))
* implement LiteLLM driver and provider registry ([#88](#88)) ([ae3f18b](ae3f18b)), closes [#4](#4)
* implement LLM decomposition strategy and workspace isolation ([#174](#174)) ([aa0eefe](aa0eefe))
* implement meeting protocol system ([#123](#123)) ([ee7caca](ee7caca))
* implement message and communication domain models ([#74](#74)) ([560a5d2](560a5d2))
* implement model routing engine ([#99](#99)) ([d3c250b](d3c250b))
* implement parallel agent execution ([#22](#22)) ([#161](#161)) ([65940b3](65940b3))
* implement per-call cost tracking service ([#7](#7)) ([#102](#102)) ([c4f1f1c](c4f1f1c))
* implement personality injection and system prompt construction ([#105](#105)) ([934dd85](934dd85))
* implement single-task execution lifecycle ([#21](#21)) ([#144](#144)) ([c7e64e4](c7e64e4))
* implement subprocess sandbox for tool execution isolation ([#131](#131)) ([#153](#153)) ([3c8394e](3c8394e))
* implement task assignment subsystem with pluggable strategies ([#172](#172)) ([c7f1b26](c7f1b26)), closes [#26](#26) [#30](#30)
* implement task decomposition and routing engine ([#14](#14)) ([9c7fb52](9c7fb52))
* implement Task, Project, Artifact, Budget, and Cost domain models ([#71](#71)) ([81eabf1](81eabf1))
* implement tool permission checking ([#16](#16)) ([833c190](833c190))
* implement YAML config loader with Pydantic validation ([#59](#59)) ([ff3a2ba](ff3a2ba))
* implement YAML config loader with Pydantic validation ([#75](#75)) ([ff3a2ba](ff3a2ba))
* initialize project with uv, hatchling, and src layout ([39005f9](39005f9))
* initialize project with uv, hatchling, and src layout ([#62](#62)) ([39005f9](39005f9))
* Litestar REST API, WebSocket feed, and approval queue (M6) ([#189](#189)) ([29fcd08](29fcd08))
* make TokenUsage.total_tokens a computed field ([#118](#118)) ([c0bab18](c0bab18)), closes [#109](#109)
* parallel tool execution in ToolInvoker.invoke_all ([#137](#137)) ([58517ee](58517ee))
* testing framework, CI pipeline, and M0 gap fixes ([#64](#64)) ([f581749](f581749))
* wire all modules into observability system ([#97](#97)) ([f7a0617](f7a0617))

### Bug Fixes

* address Greptile post-merge review findings from PRs [#170](https://github.com/Aureliolo/ai-company/issues/170)-[#175](https://github.com/Aureliolo/ai-company/issues/175) ([#176](#176)) ([c5ca929](c5ca929))
* address post-merge review feedback from PRs [#164](https://github.com/Aureliolo/ai-company/issues/164)-[#167](https://github.com/Aureliolo/ai-company/issues/167) ([#170](#170)) ([3bf897a](3bf897a)), closes [#169](#169)
* enforce strict mypy on test files ([#89](#89)) ([aeeff8c](aeeff8c))
* harden Docker sandbox, MCP bridge, and code runner ([#50](#50), [#53](#53)) ([d5e1b6e](d5e1b6e))
* harden git tools security + code quality improvements ([#150](#150)) ([000a325](000a325))
* harden subprocess cleanup, env filtering, and shutdown resilience ([#155](#155)) ([d1fe1fb](d1fe1fb))
* incorporate post-merge feedback + pre-PR review fixes ([#164](#164)) ([c02832a](c02832a))
* pre-PR review fixes for post-merge findings ([#183](#183)) ([26b3108](26b3108))
* resolve circular imports, bump litellm, fix release tag format ([#286](#286)) ([a6659b5](a6659b5))
* strengthen immutability for BaseTool schema and ToolInvoker boundaries ([#117](#117)) ([7e5e861](7e5e861))

### Performance

* harden non-inferable principle implementation ([#195](#195)) ([02b5f4e](02b5f4e)), closes [#188](#188)

### Refactoring

* adopt NotBlankStr across all models ([#108](#108)) ([#120](#120)) ([ef89b90](ef89b90))
* extract _SpendingTotals base class from spending summary models ([#111](#111)) ([2f39c1b](2f39c1b))
* harden BudgetEnforcer with error handling, validation extraction, and review fixes ([#182](#182)) ([c107bf9](c107bf9))
* harden personality profiles, department validation, and template rendering ([#158](#158)) ([10b2299](10b2299))
* pre-PR review improvements for ExecutionLoop + ReAct loop ([#124](#124)) ([8dfb3c0](8dfb3c0))
* split events.py into per-domain event modules ([#136](#136)) ([e9cba89](e9cba89))

### Documentation

* add ADR-001 memory layer evaluation and selection ([#178](#178)) ([db3026f](db3026f)), closes [#39](#39)
* add agent scaling research findings to DESIGN_SPEC ([#145](#145)) ([57e487b](57e487b))
* add CLAUDE.md, contributing guide, and dev documentation ([#65](#65)) ([55c1025](55c1025)), closes [#54](#54)
* add crash recovery, sandboxing, analytics, and testing decisions ([#127](#127)) ([5c11595](5c11595))
* address external review feedback with MVP scope and new protocols ([#128](#128)) ([3b30b9a](3b30b9a))
* expand design spec with pluggable strategy protocols ([#121](#121)) ([6832db6](6832db6))
* finalize 23 design decisions (ADR-002) ([#190](#190)) ([8c39742](8c39742))
* update project docs for M2.5 conventions and add docs-consistency review agent ([#114](#114)) ([99766ee](99766ee))

### Tests

* add e2e single agent integration tests ([#24](#24)) ([#156](#156)) ([f566fb4](f566fb4))
* add provider adapter integration tests ([#90](#90)) ([40a61f4](40a61f4))

### CI/CD

* add Release Please for automated versioning and GitHub Releases ([#278](#278)) ([a488758](a488758))
* bump actions/checkout from 4 to 6 ([#95](#95)) ([1897247](1897247))
* bump actions/upload-artifact from 4 to 7 ([#94](#94)) ([27b1517](27b1517))
* bump anchore/scan-action from 6.5.1 to 7.3.2 ([#271](#271)) ([80a1c15](80a1c15))
* bump docker/build-push-action from 6.19.2 to 7.0.0 ([#273](#273)) ([dd0219e](dd0219e))
* bump docker/login-action from 3.7.0 to 4.0.0 ([#272](#272)) ([33d6238](33d6238))
* bump docker/metadata-action from 5.10.0 to 6.0.0 ([#270](#270)) ([baee04e](baee04e))
* bump docker/setup-buildx-action from 3.12.0 to 4.0.0 ([#274](#274)) ([5fc06f7](5fc06f7))
* bump sigstore/cosign-installer from 3.9.1 to 4.1.0 ([#275](#275)) ([29dd16c](29dd16c))
* harden CI/CD pipeline ([#92](#92)) ([ce4693c](ce4693c))
* split vulnerability scans into critical-fail and high-warn tiers ([#277](#277)) ([aba48af](aba48af))

### Maintenance

* add /worktree skill for parallel worktree management ([#171](#171)) ([951e337](951e337))
* add design spec context loading to research-link skill ([8ef9685](8ef9685))
* add post-merge-cleanup skill ([#70](#70)) ([f913705](f913705))
* add pre-pr-review skill and update CLAUDE.md ([#103](#103)) ([92e9023](92e9023))
* add research-link skill and rename skill files to SKILL.md ([#101](#101)) ([651c577](651c577))
* bump aiosqlite from 0.21.0 to 0.22.1 ([#191](#191)) ([3274a86](3274a86))
* bump pyyaml from 6.0.2 to 6.0.3 in the minor-and-patch group ([#96](#96)) ([0338d0c](0338d0c))
* bump ruff from 0.15.4 to 0.15.5 ([a49ee46](a49ee46))
* fix M0 audit items ([#66](#66)) ([c7724b5](c7724b5))
* **main:** release ai-company 0.1.1 ([#282](#282)) ([2f4703d](2f4703d))
* pin setup-uv action to full SHA ([#281](#281)) ([4448002](4448002))
* post-audit cleanup — PEP 758, loggers, bug fixes, refactoring, tests, hookify rules ([#148](#148)) ([c57a6a9](c57a6a9))

---

This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please).

---------

Signed-off-by: Aurelio <19254254+Aureliolo@users.noreply.github.com>
## Summary

- `AuditRepository` protocol with append-only `save()` and filtered `query()` methods
- `SQLiteAuditRepository` with parameterized queries, JSON serialization for `matched_rules`, and structured logging
- V4 schema migration creating the `audit_entries` table with 5 indexes (timestamp, agent_id, action_type, verdict, risk_level)
- `audit_entries` property wired into the `PersistenceBackend` protocol and `SQLitePersistenceBackend`
- `DuplicateRecordError` handling for the append-only constraint (duplicate IDs raise a specific error, not a generic `QueryError`)
- `DESIGN_SPEC.md` updated to reflect implemented status (no longer "planned M7")
- `ParkedContextRepository` added to `persistence/__init__.py` re-exports

## Test plan
- `SQLiteAuditRepository`: save, query with all filter combinations, limit, ordering, round-trip (all fields, nulls, matched_rules tuple), corrupt data handling, duplicate ID rejection
- `audit_entries` property raises when not connected
- `_FakeAuditRepository` satisfies the `AuditRepository` protocol
- `FakeAuditRepository` in API conftest updated to match the protocol signature

## Review coverage
Pre-reviewed by 9 agents (code-reviewer, python-reviewer, pr-test-analyzer, silent-failure-hunter, comment-analyzer, type-design-analyzer, logging-audit, resilience-audit, docs-consistency). 17 findings identified and addressed.
Closes #260
🤖 Generated with Claude Code