
[webhooks] file-backed payload storage#337

Merged
capcom6 merged 1 commit into master from codex/fix-large-mms-attachment-issue on Apr 14, 2026

Conversation

Owner

@capcom6 capcom6 commented Apr 1, 2026

Motivation

  • Avoid large in-DB payloads causing CursorWindow/SQLiteBlobTooBigException by moving webhook payloads out of row materialization and loading them lazily.
  • Reduce memory/DB pressure when batching webhook queue reads by only fetching payloads when about to send.
  • Provide a short-term file-backed storage approach and document a longer-term plan for handling large MMS attachments.

Description

  • Added WebhookPayloadStorage which stores webhook JSON payloads to context.filesDir/webhook_payloads and exposes save, read, and delete methods; payload references use the file: prefix.
  • Registered WebhookPayloadStorage in the DI module (webhooksModule).
  • Updated SendWebhookWorker.start to save the serialized payload with WebhookPayloadStorage.save(...) and enqueue the returned file reference instead of inline JSON.
  • Added WebhookQueueRepository.getById accessor used to fetch full queue rows when needed.
  • Updated WebhookQueueProcessorWorker to inject WebhookPayloadStorage, lazily read the payload for each webhook before sending, and delete the stored file on successful send or when a webhook becomes permanently failed; updated failure handling to fetch the latest row status with getById.
  • Added a new design document docs/plans/mms-large-attachments-plan.md describing the problem, long-term migration options, and test/rollout plans for large MMS attachment handling.
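
As a minimal JVM-only sketch of the storage class described above (the constructor parameter and exact signatures here are assumptions drawn from this summary; the real class takes an Android Context and writes under context.filesDir/webhook_payloads):

```kotlin
import java.io.File
import java.util.UUID

// Sketch of a file-backed payload store. Injecting a plain directory instead
// of an Android Context keeps the idea testable off-device.
class WebhookPayloadStorage(baseDir: File) {
    private val dir = File(baseDir, "webhook_payloads").apply { mkdirs() }

    companion object {
        private const val PREFIX = "file:"
    }

    // Persists the payload and returns a reference of the form "file:<key>".
    fun save(payload: String): String {
        val key = UUID.randomUUID().toString()
        File(dir, key).writeText(payload)
        return PREFIX + key
    }

    // Accepts either a "file:<key>" reference or legacy inline JSON, mirroring
    // the read(refOrInline) contract; returns null when the file is gone.
    fun read(refOrInline: String): String? {
        if (!refOrInline.startsWith(PREFIX)) return refOrInline
        val file = File(dir, refOrInline.removePrefix(PREFIX))
        return if (file.exists()) file.readText() else null
    }

    // Strips the "file:" prefix so callers may pass the queued value as-is.
    fun delete(refOrKey: String) {
        File(dir, refOrKey.removePrefix(PREFIX)).delete()
    }
}
```

A usage note: since JSON payloads never start with "file:", the inline passthrough in read is unambiguous under this scheme.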

Testing

  • No automated tests were added or executed as part of this change; integration and unit tests should be run by CI to validate behavior (recommended: DAO projection/unit tests and an integration test with a large payload >2MB).
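
Until CI covers it, the recommended >2MB case can be smoke-tested on the JVM without instrumentation; a hedged sketch (save and read below are illustrative stand-ins for the real storage API, not the project's actual functions):

```kotlin
import java.io.File
import java.util.UUID

// Minimal stand-in for file-backed payload storage: save returns "file:<key>".
fun save(dir: File, payload: String): String {
    val key = UUID.randomUUID().toString()
    File(dir, key).writeText(payload)
    return "file:$key"
}

fun read(dir: File, ref: String): String? =
    File(dir, ref.removePrefix("file:")).takeIf { it.exists() }?.readText()

fun main() {
    val dir = File(System.getProperty("java.io.tmpdir"), "payloads-" + UUID.randomUUID())
        .apply { mkdirs() }
    // Simulate a payload well above the ~2MB CursorWindow limit.
    val large = "x".repeat(3 * 1024 * 1024)
    val ref = save(dir, large)
    check(read(dir, ref)?.length == large.length)
    println("round-trip of ${large.length} chars ok")
}
```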

Codex Task

Summary by CodeRabbit

  • New Features

    • Persistent storage for webhook payloads.
    • Webhook IDs now use generated string identifiers.
  • Bug Fixes

    • Processing safely skips when a referenced stored payload is missing.
    • Stored payloads are removed after successful delivery, permanent failure, or enqueue errors.
  • Improvements

    • Queue stores payload references to reduce queue size and memory use.
    • Database schema upgraded with an automatic migration to version 20.


coderabbitai Bot commented Apr 2, 2026

Walkthrough

Adds file-backed webhook payload storage and integrates it into DI, repository, DAO/entity types, workers, and Room schema; enqueue persists payloads as file refs, processing reads stored payloads and deletes files on completion or permanent failure; DB version bumped to 20 with auto-migration.

Changes

| Cohort | File(s) | Summary |
| --- | --- | --- |
| DI module | app/src/main/java/me/capcom/smsgateway/modules/webhooks/Module.kt | Registers WebhookPayloadStorage as a Koin singleton. |
| Payload storage | app/src/main/java/me/capcom/smsgateway/modules/webhooks/WebhookPayloadStorage.kt | New class persisting payloads to filesDir/webhook_payloads, exposing save(key, payload): String (returns file:<key>), read(refOrInline): String?, and delete(key). |
| Repository | app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt | Becomes a KoinComponent, injects WebhookPayloadStorage, generates string IDs, saves payloads as file refs, updates methods to use String IDs, and deletes payload files on insert failure, completion, and permanent failure. |
| Workers | app/src/main/java/me/capcom/smsgateway/modules/webhooks/workers/WebhookQueueProcessorWorker.kt, app/src/main/java/me/capcom/smsgateway/modules/webhooks/workers/SendWebhookWorker.kt | Processor injects WebhookPayloadStorage and reads the payload via payloadStorage.read(...); aborts the send if read returns null; failure/retry APIs updated to use a String webhookId. SendWebhookWorker had only a newline change. |
| DAO / Entity | app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueDao.kt, app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueEntity.kt | Entity id changed from auto-generated Long to caller-provided String; DAO methods updated to accept String IDs; insertWebhook no longer returns Long; added getOldEntryIds(cutoffTime: Long): List<String> and changed cleanup to accept an explicit ID list. |
| Database schema | app/schemas/me.capcom.smsgateway.data.AppDatabase/20.json, app/src/main/java/me/capcom/smsgateway/data/AppDatabase.kt | Added Room schema JSON for version 20 and bumped @Database version to 20 with AutoMigration(from = 19, to = 20). |

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Enqueuer as Enqueuer
    participant Storage as WebhookPayloadStorage
    participant Repo as WebhookQueueRepository
    participant DB as Database/DAO
    participant Processor as WebhookQueueProcessorWorker
    participant Sender as SendWebhookWorker

    Enqueuer->>Storage: save(payload)
    Storage-->>Enqueuer: "file:<key>"
    Enqueuer->>Repo: enqueueWebhook(url, payloadRef="file:<key>")
    Repo->>DB: insert webhook row (id:String, payload="file:<key>")

    Processor->>DB: fetch pending webhook
    DB-->>Processor: webhook row (id:String, payload:"file:<key>")
    Processor->>Storage: read("file:<key>")
    Storage-->>Processor: payload JSON or null
    alt payload present
        Processor->>Sender: send HTTP request (payload JSON)
        Sender-->>Processor: result (success/failure)
        alt success
            Processor->>DB: markAsCompleted(id)
            Processor->>Storage: delete("<key>")
        else failure
            Processor->>Repo: scheduleRetry(id, error)
        end
    else payload missing
        Processor->>Repo: permanentlyFailWebhook(id, "missing payload")
    end
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

  • [webhooks] webhooks queue #297: Modifies the same webhook-queue subsystem (Module DI, Queue entity/DAO/Repository, workers, AppDatabase schema); likely directly related.
🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled. |
| Title Check | ✅ Passed | The title '[webhooks] file-backed payload storage' accurately describes the main change: introducing file-backed storage for webhook payloads to avoid database issues with large payloads. |
| Docstring Coverage | ✅ Passed | Docstring coverage is 81.82%, which is sufficient; the required threshold is 80.00%. |




@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
app/src/main/java/me/capcom/smsgateway/modules/webhooks/workers/SendWebhookWorker.kt (1)

222-252: ⚠️ Potential issue | 🔴 Critical

Don't fall back to direct delivery once the webhook is already enqueued.

This try covers enqueueWebhook(), logging, and WebhookQueueProcessorWorker.start(). If anything after the insert throws, the catch path schedules the legacy worker too, so the same webhook can be delivered twice while the queued row still exists. The same broad catch also leaves an orphaned payload file behind when save() succeeds but enqueueing fails.

🔁 Narrow the fallback window and clean up orphaned payload refs
```diff
         fun start(
             context: Context,
             url: String,
             data: WebHookEventDTO,
             internetRequired: Boolean,
         ) {
             val logsService = get<LogsService>()
+            var payloadRef: String? = null
+            var queueEnqueued = false

             // Enqueue the webhook event to the persistent queue
             try {
                 val queueRepository = get<WebhookQueueRepository>()
                 val payloadStorage = get<WebhookPayloadStorage>()
-                val payloadRef = payloadStorage.save(gson.toJson(data))
+                payloadRef = payloadStorage.save(gson.toJson(data))

                 // Enqueue to the persistent queue
                 val queueId = runBlocking {
                     queueRepository.enqueueWebhook(
                         url = url,
-                        payload = payloadRef,
+                        payload = requireNotNull(payloadRef),
                     )
                 }
+                queueEnqueued = true

                 logsService.insert(
                     priority = LogEntry.Priority.DEBUG,
                     module = NAME,
                     message = "Webhook enqueued to persistent queue",
@@
                 WebhookQueueProcessorWorker.start(
                     context = context,
                     internetRequired = internetRequired
                 )

             } catch (e: Exception) {
+                if (queueEnqueued) {
+                    logsService.insert(
+                        priority = LogEntry.Priority.ERROR,
+                        module = NAME,
+                        message = "Webhook was already enqueued; skipping direct-send fallback: ${e.message}",
+                        context = mapOf("url" to url)
+                    )
+                    return
+                }
+                payloadRef?.let { runCatching { get<WebhookPayloadStorage>().delete(it) } }
+
                 // Fallback to direct processing if queue enqueue fails
                 logsService.insert(
                     priority = LogEntry.Priority.WARN,
                     module = NAME,
                     message = "Queue enqueue failed, falling back to direct processing: ${e.message}",
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/workers/SendWebhookWorker.kt`
around lines 222 - 252, The try/catch currently wraps payloadStorage.save,
queueRepository.enqueueWebhook, logsService.insert and
WebhookQueueProcessorWorker.start which allows a fallback to the legacy
direct-delivery worker even when enqueueWebhook succeeded and can leave orphaned
payload files; restrict the try so that payloadStorage.save and enqueueWebhook
are called and checked first, only schedule the legacy worker (the fallback)
inside a smaller catch that runs if enqueueWebhook fails, and on enqueue failure
delete the saved payloadRef (use WebhookPayloadStorage.delete or equivalent)
before throwing/logging; ensure WebhookQueueProcessorWorker.start and
logsService.insert only run after successful enqueue and do not trigger the
fallback when enqueueWebhook returns a valid queueId.
🧹 Nitpick comments (1)
app/src/main/java/me/capcom/smsgateway/modules/webhooks/workers/WebhookQueueProcessorWorker.kt (1)

303-304: Distinguish missing payload files from retriable send failures.

WebhookPayloadStorage.read() returning null means the referenced file is already gone. Returning false here sends the row through the normal retry path, but retries cannot reconstruct that payload and only burn attempts on an unrecoverable condition. Consider surfacing a terminal failure result and logging it explicitly.
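
The separation this comment asks for can be made explicit in the worker's result type; a hedged sketch (ProcessResult and process are illustrative names, not the actual worker API):

```kotlin
// Three-way outcome instead of a boolean, so a missing payload file is not
// funneled into the normal retry path.
sealed class ProcessResult {
    object Success : ProcessResult()
    data class Retry(val error: String) : ProcessResult()
    data class PermanentFailure(val error: String) : ProcessResult()
}

// readPayload stands in for WebhookPayloadStorage.read; send for the HTTP call.
fun process(
    webhookId: String,
    payloadRef: String,
    readPayload: (String) -> String?,
    send: (String) -> Boolean,
): ProcessResult {
    val payload = readPayload(payloadRef)
        // A null read means the file is gone: terminal, not retriable.
        ?: return ProcessResult.PermanentFailure("missing payload for $webhookId: $payloadRef")
    return if (send(payload)) ProcessResult.Success
    else ProcessResult.Retry("send failed for $webhookId")
}
```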

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/workers/WebhookQueueProcessorWorker.kt`
around lines 303 - 304, WebhookPayloadStorage.read(webhook.payload) returning
null indicates a terminal missing payload; don't treat it as a retriable send
failure. Update the processing in WebhookQueueProcessorWorker (the block using
payloadStorage.read(...) and gson.fromJson(...)) to detect a null payload, log
an explicit error including webhook.id and webhook.payload, and return/emit a
terminal failure result (not false) so the retry mechanism won’t requeue; only
proceed to gson.fromJson(...) and normal retry semantics when payload is
non-null.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt`:
- Around line 42-44: Change WebhookQueueDao.getById and
WebhookQueueRepository.getById to return a nullable WebhookQueueEntity? instead
of non-nullable to reflect Room's possible null result, then update both call
sites inside handleWebhookFailure to check for null (early return or no-op)
before using the entity; specifically modify the DAO signature, the repository
method getById(webhookId: Long): WebhookQueueEntity?, and adjust the two places
in handleWebhookFailure that call repository.getById(...) to handle the nullable
result safely (e.g., val entity = getById(id) ?: return or guard with if (entity
== null) { /* handle missing */ }).


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: acd60be9-3651-4b2b-9e4b-01d04b37a4d9

📥 Commits

Reviewing files that changed from the base of the PR and between 4d6169b and 3fcc67f.

📒 Files selected for processing (6)
  • app/src/main/java/me/capcom/smsgateway/modules/webhooks/Module.kt
  • app/src/main/java/me/capcom/smsgateway/modules/webhooks/WebhookPayloadStorage.kt
  • app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt
  • app/src/main/java/me/capcom/smsgateway/modules/webhooks/workers/SendWebhookWorker.kt
  • app/src/main/java/me/capcom/smsgateway/modules/webhooks/workers/WebhookQueueProcessorWorker.kt
  • docs/plans/mms-large-attachments-plan.md

@capcom6 capcom6 linked an issue Apr 2, 2026 that may be closed by this pull request
@capcom6 capcom6 force-pushed the codex/fix-large-mms-attachment-issue branch from 3fcc67f to fe2bc5b Compare April 2, 2026 03:47
@capcom6 capcom6 changed the title File-backed webhook payload storage and lazy payload loading in queue processor [webhooks] file-backed payload storage Apr 2, 2026
@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt (1)

19-33: ⚠️ Potential issue | 🟠 Major

Rollback the saved payload file if the insert fails.

payloadStorage.save(payload) happens before dao.insertWebhook(...). If the insert throws, nothing points at that file anymore and it will never be cleaned up.

🩹 Minimal fix
```diff
     suspend fun enqueueWebhook(
         url: String,
         payload: String,
     ): Long {
         val payloadRef = payloadStorage.save(payload)

-        return dao.insertWebhook(
-            WebhookQueueEntity(
-                url = url,
-                payload = payloadRef,
-                status = WebhookStatus.PENDING,
-                createdAt = System.currentTimeMillis(),
-                nextAttempt = System.currentTimeMillis()
-            )
-        )
+        return try {
+            dao.insertWebhook(
+                WebhookQueueEntity(
+                    url = url,
+                    payload = payloadRef,
+                    status = WebhookStatus.PENDING,
+                    createdAt = System.currentTimeMillis(),
+                    nextAttempt = System.currentTimeMillis()
+                )
+            )
+        } catch (e: Exception) {
+            payloadStorage.delete(payloadRef)
+            throw e
+        }
     }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt`
around lines 19 - 33, The saved payload file created by payloadStorage.save(...)
in enqueueWebhook can leak if dao.insertWebhook(...) throws; wrap the insert in
a try/catch that on any exception calls payloadStorage.delete(payloadRef) to
roll back the saved file and then rethrows the original exception (do not
swallow it). Specifically, in enqueueWebhook, keep the call to
payloadStorage.save(payload) to obtain payloadRef, then perform
dao.insertWebhook(WebhookQueueEntity(...)) inside a try block and in the catch
call payloadStorage.delete(payloadRef) before throwing the caught exception;
this ensures payloadStorage.save, payloadStorage.delete, enqueueWebhook, and
dao.insertWebhook are used to locate and fix the leak.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt`:
- Around line 60-62: The code currently deletes payload files using
webhookId.toString() and deletes before updating DB; instead, read the stored
payload reference (e.g., payloadFilename/payloadRef) from the webhook record,
perform the DAO status update (e.g., in completeWebhook, permanentlyFailWebhook
and the retry path such as failAndRetryWebhook), verify the update succeeded,
and only then call payloadStorage.delete(payloadRef); do not use
webhookId.toString() as the filename and do not delete prior to a successful DAO
update.

In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/WebhookPayloadStorage.kt`:
- Around line 38-42: The delete() method treats its key parameter as a raw
filename but save() stores names prefixed with "file:<uuid>" and read() already
handles that format; update delete(key: String) to normalize the incoming key by
stripping the "file:" prefix (if present) before constructing File(payloadDir,
normalizedName) and deleting it so callers that pass the queued payload value
actually remove the stored file; touch the WebhookPayloadStorage.delete method
to perform this normalization (consistent with save() and read()).
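
The normalization described above is small enough to show directly; a hedged sketch (the helper names below are illustrative, not the project's actual functions):

```kotlin
// save() stores payloads under a "file:<key>" reference, while delete() may
// receive either a bare key or the queued reference; stripping the prefix
// makes delete() consistent with read().
fun normalizeKey(refOrKey: String): String = refOrKey.removePrefix("file:")

// Resolve and remove the stored payload file regardless of caller convention.
fun deletePayload(dir: java.io.File, refOrKey: String): Boolean =
    java.io.File(dir, normalizeKey(refOrKey)).delete()
```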


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 5c3ba2a4-7b21-4931-8c86-7bec5488f0ce

📥 Commits

Reviewing files that changed from the base of the PR and between fe2bc5b and aea7f32.

📒 Files selected for processing (5)
  • app/src/main/java/me/capcom/smsgateway/modules/webhooks/WebhookPayloadStorage.kt
  • app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueDao.kt
  • app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt
  • app/src/main/java/me/capcom/smsgateway/modules/webhooks/workers/SendWebhookWorker.kt
  • app/src/main/java/me/capcom/smsgateway/modules/webhooks/workers/WebhookQueueProcessorWorker.kt
✅ Files skipped from review due to trivial changes (2)
  • app/src/main/java/me/capcom/smsgateway/modules/webhooks/workers/SendWebhookWorker.kt
  • app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueDao.kt
🚧 Files skipped from review as they are similar to previous changes (1)
  • app/src/main/java/me/capcom/smsgateway/modules/webhooks/workers/WebhookQueueProcessorWorker.kt


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt (1)

20-37: 🛠️ Refactor suggestion | 🟠 Major

Return the actual queue id from enqueueWebhook().

Lines 24-31 generate the canonical queue id as id, but the method still returns the DAO result instead. That leaves this repository API out of sync with the rest of the queue surface, which now addresses rows by String, and any caller that keeps the return value cannot use it to refer back to the same webhook. Return id after a successful insert, or make this method Unit if nobody needs it.

🔧 Minimal alignment
```diff
-    ): Long {
+    ): String {
         val id = NanoIdUtils.randomNanoId()
         val payloadRef = payloadStorage.save(id, payload)

-        return dao.insertWebhook(
+        dao.insertWebhook(
             WebhookQueueEntity(
                 id = id,
                 url = url,
                 payload = payloadRef,
                 status = WebhookStatus.PENDING,
                 createdAt = System.currentTimeMillis(),
                 nextAttempt = System.currentTimeMillis()
             )
         )
+        return id
     }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt`
around lines 20 - 37, enqueueWebhook currently generates the canonical string id
via NanoIdUtils.randomNanoId() and uses it to save the payload and construct the
WebhookQueueEntity, but returns the DAO insert result (Long) instead of that id;
change enqueueWebhook (the suspend function enqueueWebhook) to return the
generated id (the variable id) after a successful
dao.insertWebhook(WebhookQueueEntity(...)) call (or make the function Unit if
the return value is truly unused) so callers receive the string queue id that
matches the stored entity and payloadStorage.save(id, payload) usage.
app/src/main/java/me/capcom/smsgateway/data/AppDatabase.kt (1)

36-56: ⚠️ Potential issue | 🟠 Major

Verify the 19→20 Room migration explicitly handles the webhook_queue.id primary key type change.

The migration from version 19 to 20 changes webhook_queue.id from INTEGER PRIMARY KEY AUTOINCREMENT to TEXT NOT NULL PRIMARY KEY. This is a primary key type rewrite—not a simple schema addition—and requires explicit verification before shipping. Currently, this relies on a bare AutoMigration(from = 19, to = 20) without an AutoMigrationSpec to guide Room through the transformation.

Room 2.4.3's auto-migrations handle additive changes (adding tables/columns, deleting tables/columns) but changing a primary key's type is complex. Either test this migration thoroughly on existing databases (v19 → v20), or add an explicit AutoMigrationSpec or MIGRATION_19_20 handler to control the transformation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/src/main/java/me/capcom/smsgateway/data/AppDatabase.kt` around lines 36 -
56, The AutoMigration(from = 19, to = 20) is insufficient because
webhook_queue.id changes type (INTEGER PK -> TEXT PK); update AppDatabase to
replace the bare AutoMigration(from = 19, to = 20) with an explicit migration
strategy: either add an AutoMigrationSpec implementation for the 19→20
auto-migration or register a manual Migration object (e.g., MIGRATION_19_20)
that performs a controlled table copy/transform (create new table with TEXT
primary key, copy/convert rows, drop old table, rename) and include it in the
database builder; ensure the migration targets the webhook_queue.id column
conversion and is exercised by a unit/instrumentation migration test upgrading a
v19 DB to v20.
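
For illustration, the manual migration this prompt describes might look roughly like the following (a sketch assuming androidx.room and the column set discussed in this thread; the real 19.json/20.json schemas should be consulted before writing it, and a Room migration test should upgrade a real v19 database):

```kotlin
import androidx.room.migration.Migration
import androidx.sqlite.db.SupportSQLiteDatabase

// Manual 19 -> 20 migration: rebuild webhook_queue with a TEXT primary key,
// converting existing INTEGER ids to their string form. Column names here are
// assumptions based on this review thread, not the actual schema.
val MIGRATION_19_20 = object : Migration(19, 20) {
    override fun migrate(db: SupportSQLiteDatabase) {
        db.execSQL(
            """CREATE TABLE webhook_queue_new (
                 id TEXT NOT NULL PRIMARY KEY,
                 url TEXT NOT NULL,
                 payload TEXT NOT NULL,
                 status TEXT NOT NULL,
                 createdAt INTEGER NOT NULL,
                 nextAttempt INTEGER NOT NULL
               )"""
        )
        db.execSQL(
            """INSERT INTO webhook_queue_new (id, url, payload, status, createdAt, nextAttempt)
               SELECT CAST(id AS TEXT), url, payload, status, createdAt, nextAttempt
               FROM webhook_queue"""
        )
        db.execSQL("DROP TABLE webhook_queue")
        db.execSQL("ALTER TABLE webhook_queue_new RENAME TO webhook_queue")
    }
}
```
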
♻️ Duplicate comments (1)
app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt (1)

63-65: ⚠️ Potential issue | 🟠 Major

Delete the stored payload ref after the status update.

As implemented in app/src/main/java/me/capcom/smsgateway/modules/webhooks/WebhookPayloadStorage.kt:15-19, save() persists file:<key>. Lines 64 and 101 delete the raw webhookId before the DAO update, so completed/permanently-failed rows leak their payload files, and a DAO error here would leave a live row pointing at a missing payload. Pass the stored payload ref through these methods and delete it only after the status change succeeds.

🧹 Minimal fix
```diff
-    suspend fun completeWebhook(webhookId: String) {
-        payloadStorage.delete(webhookId)
-        dao.markAsCompleted(id = webhookId)
+    suspend fun completeWebhook(webhookId: String, payloadRef: String) {
+        dao.markAsCompleted(id = webhookId)
+        payloadStorage.delete(payloadRef)
     }
@@
-            permanentlyFailWebhook(
-                webhookId = webhookId,
-                error = error ?: "Max retries exceeded",
-            )
+            permanentlyFailWebhook(
+                webhookId = webhookId,
+                payloadRef = webhook.payload,
+                error = error ?: "Max retries exceeded",
+            )
@@
-    suspend fun permanentlyFailWebhook(webhookId: String, error: String) {
-        payloadStorage.delete(webhookId.toString())
-        dao.markAsPermanentlyFailed(
+    suspend fun permanentlyFailWebhook(webhookId: String, payloadRef: String, error: String) {
+        dao.markAsPermanentlyFailed(
             id = webhookId,
             error = error
         )
+        payloadStorage.delete(payloadRef)
     }
```

You’ll also need to pass webhook.payload from app/src/main/java/me/capcom/smsgateway/modules/webhooks/workers/WebhookQueueProcessorWorker.kt:246-248.

Also applies to: 90-105

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt`
around lines 63 - 65, The code deletes the stored payload by key before updating
DB, causing leaked files or broken rows on DAO errors; modify completeWebhook
(and the analogous failure method used at 90-105) to accept and propagate the
stored payload ref (returned by WebhookPayloadStorage.save, e.g., the
"file:<key>" value) instead of deleting by webhookId, call
dao.markAsCompleted/markAsPermanentlyFailed first, and only call
payloadStorage.delete(payloadRef) after the DAO update succeeds; update callers
(notably WebhookQueueProcessorWorker where you currently pass webhookId at lines
~246-248) to pass webhook.payload (the stored payload ref) through into these
repository methods.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/workers/WebhookQueueProcessorWorker.kt`:
- Around line 301-302: The current code in WebhookQueueProcessorWorker forces
every webhook.payload through payloadStorage.read(webhook.payload) which breaks
pre-upgrade rows that contain inline JSON; change the logic so that if
webhook.payload begins with the storage prefix (e.g., "file:") you call
payloadStorage.read(webhook.payload) and otherwise treat webhook.payload as the
inline JSON payload string and pass that directly to gson.fromJson; update the
block around payloadStorage.read(...) and gson.fromJson(...) (where
webhook.payload is used) to implement this fallback so old queue rows remain
readable.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 3b73000e-24a9-4cc2-85b3-737cd6e72bae

📥 Commits

Reviewing files that changed from the base of the PR and between aea7f32 and dea85e4.

📒 Files selected for processing (7)
  • app/schemas/me.capcom.smsgateway.data.AppDatabase/20.json
  • app/src/main/java/me/capcom/smsgateway/data/AppDatabase.kt
  • app/src/main/java/me/capcom/smsgateway/modules/webhooks/WebhookPayloadStorage.kt
  • app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueDao.kt
  • app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueEntity.kt
  • app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt
  • app/src/main/java/me/capcom/smsgateway/modules/webhooks/workers/WebhookQueueProcessorWorker.kt
🚧 Files skipped from review as they are similar to previous changes (1)
  • app/src/main/java/me/capcom/smsgateway/modules/webhooks/WebhookPayloadStorage.kt


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (2)
app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt (1)

65-67: ⚠️ Potential issue | 🟠 Major

Persist the terminal state before deleting the payload file.

Lines 66 and 103 delete the payload before persisting the terminal state. If the DAO update then fails, the queue entry stays live but its backing payload is gone, so retries or stuck-worker recovery cannot re-read it.

🩹 Minimal fix
     suspend fun completeWebhook(webhookId: String) {
-        payloadStorage.delete(webhookId)
         dao.markAsCompleted(id = webhookId)
+        payloadStorage.delete(webhookId)
     }
@@
     suspend fun permanentlyFailWebhook(webhookId: String, error: String) {
-        payloadStorage.delete(webhookId.toString())
         dao.markAsPermanentlyFailed(
             id = webhookId,
             error = error
         )
+        payloadStorage.delete(webhookId)
     }

Also applies to: 102-107

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt`
around lines 65 - 67, completeWebhook currently deletes the payload via
payloadStorage.delete(webhookId) before persisting the terminal state with
dao.markAsCompleted(id = webhookId); change the order so you first call
dao.markAsCompleted(...) and only upon successful completion delete the payload.
Ensure the dao.markAsCompleted call is awaited/checked for success and
handle/propagate errors so payload is not removed on failure; apply the same
change to the other method that also calls payloadStorage.delete before updating
the DAO (i.e., any code paths using payloadStorage.delete + dao.* update).
app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueDao.kt (1)

20-21: ⚠️ Potential issue | 🟠 Major

Make getById() nullable.

The non-nullable return type on line 21 cannot represent a missing row safely. This lookup is used on recovery paths, so WebhookQueueEntity? is the right contract; otherwise callers can only discover "not found" via a runtime crash.

🩹 Minimal fix
-    suspend fun getById(id: String): WebhookQueueEntity
+    suspend fun getById(id: String): WebhookQueueEntity?

Then update downstream callers to handle the null case explicitly.
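The nullable contract and an explicit null branch on the recovery path can be sketched with an in-memory stand-in for the Room DAO (all names here are hypothetical):

```kotlin
data class WebhookQueueEntity(val id: String, val payload: String)

class InMemoryDao {
    private val rows = mutableMapOf<String, WebhookQueueEntity>()
    fun insert(e: WebhookQueueEntity) { rows[e.id] = e }
    // Nullable: a missing row is a representable value, not a crash.
    fun getById(id: String): WebhookQueueEntity? = rows[id]
}

// Recovery-path caller handles "not found" with an early return.
fun latestStatusOrSkip(dao: InMemoryDao, id: String): String {
    val row = dao.getById(id) ?: return "skipped: $id no longer exists"
    return "payload ref: ${row.payload}"
}

fun main() {
    val dao = InMemoryDao()
    dao.insert(WebhookQueueEntity("abc", "file:abc"))
    println(latestStatusOrSkip(dao, "abc"))
    println(latestStatusOrSkip(dao, "gone"))  // no exception on a deleted row
}
```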

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueDao.kt`
around lines 20 - 21, Change the DAO contract for the lookup to return a
nullable entity so missing rows are represented safely: update the suspend
function WebhookQueueDao.getById(id: String): WebhookQueueEntity to return
WebhookQueueEntity? and then locate and update all downstream callers that
invoke getById to handle the null case explicitly (e.g., recovery path branches
or early returns) instead of assuming a non-null result.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt`:
- Around line 24-38: The code saves the payload via payloadStorage.save(id,
payload) before inserting the DB row with dao.insertWebhook(...), so if
insertWebhook throws you get an orphaned file; fix by surrounding
dao.insertWebhook(WebhookQueueEntity(...)) with a try/catch that on any
exception calls payloadStorage.delete(payloadRef) (or the appropriate removal
method) to remove the saved payload and then rethrow the exception; use the
existing id/payloadRef from NanoIdUtils.randomNanoId()/payloadStorage.save to
identify the file and ensure cleanup happens before propagating the error.

---

Duplicate comments:
In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueDao.kt`:
- Around line 20-21: Change the DAO contract for the lookup to return a nullable
entity so missing rows are represented safely: update the suspend function
WebhookQueueDao.getById(id: String): WebhookQueueEntity to return
WebhookQueueEntity? and then locate and update all downstream callers that
invoke getById to handle the null case explicitly (e.g., recovery path branches
or early returns) instead of assuming a non-null result.

In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt`:
- Around line 65-67: completeWebhook currently deletes the payload via
payloadStorage.delete(webhookId) before persisting the terminal state with
dao.markAsCompleted(id = webhookId); change the order so you first call
dao.markAsCompleted(...) and only upon successful completion delete the payload.
Ensure the dao.markAsCompleted call is awaited/checked for success and
handle/propagate errors so payload is not removed on failure; apply the same
change to the other method that also calls payloadStorage.delete before updating
the DAO (i.e., any code paths using payloadStorage.delete + dao.* update).

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: e9f741ed-2a2b-42ca-94dd-bf066d0a4bcc

📥 Commits

Reviewing files that changed from the base of the PR and between dea85e4 and 0335fa7.

📒 Files selected for processing (2)
  • app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueDao.kt
  • app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt (1)

24-28: ⚠️ Potential issue | 🔴 Critical

Critical: Duplicate payloadStorage.save() call leaks files.

payloadStorage.save(id, payload) is called twice: once at line 25 (outside the try) and again at line 28 (inside it). The inner declaration shadows the outer payloadRef, which is never used, so a file is written and immediately orphaned on every enqueue. Remove lines 25-26.

🐛 Proposed fix
     ): String {
         val id = NanoIdUtils.randomNanoId()
-        val payloadRef = payloadStorage.save(id, payload)
-
         try {
             val payloadRef = payloadStorage.save(id, payload)
             dao.insertWebhook(
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt`
around lines 24 - 28, The code calls payloadStorage.save(id, payload) twice
which creates orphaned files; remove the redundant call so payload is saved
exactly once inside the try block. Specifically, delete the outer val payloadRef
= payloadStorage.save(id, payload) and ensure the single payloadRef declared
inside the try (from payloadStorage.save(id, payload)) is the one used/returned
by the enclosing method (e.g., in WebhookQueueRepository functions that call
NanoIdUtils.randomNanoId() and payloadStorage.save). Also ensure you don't
shadow payloadRef and that any subsequent logic references the try-block
payloadRef.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt`:
- Around line 39-42: The catch block is deleting by the original id instead of
the storage reference returned by payloadStorage.save; capture the returned
payloadRef from payloadStorage.save(...) before calling the DAO (e.g., assign
payloadRef = payloadStorage.save(...)), pass that payloadRef into the DAO insert
call if needed, and in the catch use payloadStorage.delete(payloadRef) (not
delete(id)) so the actual stored file (including any transformed reference like
"file:...") is removed; update WebhookQueueRepository methods that call
payloadStorage.save and payloadStorage.delete accordingly.

---

Duplicate comments:
In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt`:
- Around line 24-28: The code calls payloadStorage.save(id, payload) twice which
creates orphaned files; remove the redundant call so payload is saved exactly
once inside the try block. Specifically, delete the outer val payloadRef =
payloadStorage.save(id, payload) and ensure the single payloadRef declared
inside the try (from payloadStorage.save(id, payload)) is the one used/returned
by the enclosing method (e.g., in WebhookQueueRepository functions that call
NanoIdUtils.randomNanoId() and payloadStorage.save). Also ensure you don't
shadow payloadRef and that any subsequent logic references the try-block
payloadRef.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 74d4950e-cd61-49d9-b892-c02dad543610

📥 Commits

Reviewing files that changed from the base of the PR and between 0335fa7 and 68b2009.

📒 Files selected for processing (1)
  • app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt


@coderabbitai coderabbitai Bot left a comment


♻️ Duplicate comments (2)
app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt (2)

38-40: ⚠️ Potential issue | 🟠 Major

Delete the saved payload by payloadRef, not the webhook id.

WebhookQueueEntity.payload stores the reference returned by payloadStorage.save(...). If insertWebhook() throws, delete(id) will miss a file:... ref and leave the file orphaned.

🩹 Minimal fix
         } catch (e: Exception) {
-            payloadStorage.delete(id)
+            payloadStorage.delete(payloadRef)
             throw e
         }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt`
around lines 38 - 40, In the catch block inside WebhookQueueRepository (around
the insertWebhook flow) replace the call to payloadStorage.delete(id) with
deletion using the saved payload reference (the value returned by
payloadStorage.save, e.g., payloadRef or WebhookQueueEntity.payload) so the
stored file ref (file:...) is removed if insertWebhook() throws; ensure you
reference the same variable used when calling payloadStorage.save(...) rather
than the webhook id.

70-72: ⚠️ Potential issue | 🟠 Major

Use the stored payload ref here too, and delete it only after the DAO update succeeds.

These paths currently pass webhookId to payloadStorage.delete, which cannot resolve the stored file:... ref. That leaks files today, and the pre-update delete ordering is still risky: a successful cleanup followed by a failed DAO write would leave a live row without its payload.

🩹 Minimal fix
-    suspend fun completeWebhook(webhookId: String) {
-        payloadStorage.delete(webhookId)
-        dao.markAsCompleted(id = webhookId)
+    suspend fun completeWebhook(webhookId: String, payloadRef: String) {
+        dao.markAsCompleted(id = webhookId)
+        payloadStorage.delete(payloadRef)
     }
@@
             permanentlyFailWebhook(
                 webhookId = webhookId,
+                payloadRef = webhook.payload,
                 error = error ?: "Max retries exceeded",
             )
@@
-    suspend fun permanentlyFailWebhook(webhookId: String, error: String) {
-        payloadStorage.delete(webhookId.toString())
+    suspend fun permanentlyFailWebhook(webhookId: String, payloadRef: String, error: String) {
         dao.markAsPermanentlyFailed(
             id = webhookId,
             error = error
         )
+        payloadStorage.delete(payloadRef)
     }

Also applies to: 97-100, 107-112

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt`
around lines 70 - 72, In completeWebhook (and the other similar methods around
the other occurrences), read the stored payload reference from the DAO/row (the
file:... ref) and use that when deleting from payloadStorage instead of passing
webhookId; also move the delete to after the DAO update so the sequence is:
fetch payloadRef from the record, call dao.markAsCompleted/markAsFailed/etc.
with the id, then if the DAO update succeeded call
payloadStorage.delete(payloadRef); update code paths that currently call
payloadStorage.delete(webhookId) to use the retrieved payloadRef and ensure
deletion happens post-DAO-write to avoid leaks or orphaned rows (refer to
payloadStorage and dao.markAsCompleted/markAsFailed/whatever methods to locate
the spots).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt`:
- Around line 38-40: In the catch block inside WebhookQueueRepository (around
the insertWebhook flow) replace the call to payloadStorage.delete(id) with
deletion using the saved payload reference (the value returned by
payloadStorage.save, e.g., payloadRef or WebhookQueueEntity.payload) so the
stored file ref (file:...) is removed if insertWebhook() throws; ensure you
reference the same variable used when calling payloadStorage.save(...) rather
than the webhook id.
- Around line 70-72: In completeWebhook (and the other similar methods around
the other occurrences), read the stored payload reference from the DAO/row (the
file:... ref) and use that when deleting from payloadStorage instead of passing
webhookId; also move the delete to after the DAO update so the sequence is:
fetch payloadRef from the record, call dao.markAsCompleted/markAsFailed/etc.
with the id, then if the DAO update succeeded call
payloadStorage.delete(payloadRef); update code paths that currently call
payloadStorage.delete(webhookId) to use the retrieved payloadRef and ensure
deletion happens post-DAO-write to avoid leaks or orphaned rows (refer to
payloadStorage and dao.markAsCompleted/markAsFailed/whatever methods to locate
the spots).

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: bd892798-905f-4f65-979c-4b28740ac9be

📥 Commits

Reviewing files that changed from the base of the PR and between 68b2009 and 30ca877.

📒 Files selected for processing (1)
  • app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt

@capcom6 capcom6 force-pushed the codex/fix-large-mms-attachment-issue branch from 30ca877 to e9cb23c Compare April 9, 2026 02:16
@capcom6 capcom6 marked this pull request as ready for review April 10, 2026 00:41
@github-actions
Contributor

github-actions Bot commented Apr 10, 2026

🤖 Pull request artifacts

file commit
app-release.apk 03eab39
app-release.aab 03eab39
app-insecure.apk 03eab39
app-insecure.aab 03eab39

@capcom6 capcom6 force-pushed the codex/fix-large-mms-attachment-issue branch from de9c5f5 to 03eab39 Compare April 14, 2026 00:43

@coderabbitai coderabbitai Bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt (2)

107-113: ⚠️ Potential issue | 🟠 Major

Same ordering issue and redundant .toString().

  1. Delete should happen after dao.markAsPermanentlyFailed() succeeds to avoid orphaned rows without payloads.
  2. webhookId is already String, so .toString() is redundant.
🩹 Proposed fix
 suspend fun permanentlyFailWebhook(webhookId: String, error: String) {
-    payloadStorage.delete(webhookId.toString())
     dao.markAsPermanentlyFailed(
         id = webhookId,
         error = error
     )
+    payloadStorage.delete(webhookId)
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt`
around lines 107 - 113, In permanentlyFailWebhook, reverse the operations so
dao.markAsPermanentlyFailed(...) is called first and only on success call
payloadStorage.delete(...); also remove the redundant webhookId.toString() and
pass webhookId directly to both dao.markAsPermanentlyFailed (id) and
payloadStorage.delete so you don't delete the payload prematurely or use
unnecessary conversions.

84-84: ⚠️ Potential issue | 🟠 Major

dao.getById() will throw an exception if the webhook is deleted between startProcessing and this call.

The return type WebhookQueueEntity is non-nullable. If the webhook row is removed (e.g., by cleanup or a race condition) between startProcessing (line 64) and this call, Room will throw an exception since it cannot return null for a non-nullable type, and there's no error handling in place. Change the return type to WebhookQueueEntity? and handle the null case gracefully.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt`
at line 84, The DAO call dao.getById(webhookId) currently expects a non-null
WebhookQueueEntity and will throw if the row was deleted after startProcessing;
change the DAO method signature to return WebhookQueueEntity? (nullable) and
update WebhookQueueRepository's code that calls getById (the startProcessing
flow) to handle a null result gracefully—e.g., detect null, log/metrics and
abort processing or mark the job as finished—so that deleted rows do not cause
an exception; reference: dao.getById, WebhookQueueEntity, startProcessing,
WebhookQueueRepository.
♻️ Duplicate comments (1)
app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt (1)

70-73: ⚠️ Potential issue | 🟠 Major

Delete payload after successful DB update, not before.

If dao.markAsCompleted() throws after the payload is deleted, the queue row remains with status processing but its payload file is gone. Subsequent retry attempts will fail to read the payload, leaving the webhook in a corrupted state until it becomes permanently failed.

🩹 Proposed fix
 suspend fun completeWebhook(webhookId: String) {
-    payloadStorage.delete(webhookId)
     dao.markAsCompleted(id = webhookId)
+    payloadStorage.delete(webhookId)
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt`
around lines 70 - 73, The current completeWebhook function deletes the payload
before updating the DB which can leave the queue row stuck in "processing"
without its payload if dao.markAsCompleted(id = webhookId) throws; change the
ordering so dao.markAsCompleted(webhookId) is executed first and only call
payloadStorage.delete(webhookId) after it returns successfully, and optionally
wrap dao.markAsCompleted in a try/catch to avoid deleting on failure (or rethrow
after ensuring payload remains); references: completeWebhook,
payloadStorage.delete, dao.markAsCompleted.
🧹 Nitpick comments (2)
app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt (1)

120-122: Deletion by ID is correct, but consider transaction safety.

The IDs returned by getOldEntryIds are NanoIds (the storage keys), so payloadStorage.delete(it) correctly removes the files. However, if dao.cleanupOldEntries() fails mid-way, some files may be deleted while their DB rows persist, or vice versa if new entries are added between the two calls. For robustness, consider wrapping in a transaction or accepting eventual consistency.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt`
around lines 120 - 122, The current sequence in WebhookQueueRepository (calling
dao.getOldEntryIds, payloadStorage.delete for each id, then
dao.cleanupOldEntries) can leave storage and DB out-of-sync if
dao.cleanupOldEntries fails; wrap the delete operations in a DB transaction or
switch to a two-phase approach: begin a transaction in the DAO (or add a
dao.markEntriesForDeletion method), fetch and mark the entries (e.g., set a
pending_delete flag) via WebhookQueueRepository, commit the DB change, then
delete payloadStorage files for the marked NanoId keys and finally remove rows
with dao.cleanupMarkedEntries (or perform deletions inside the same transaction
if your storage supports transactional semantics) so that functions
dao.getOldEntryIds, dao.cleanupOldEntries, and payloadStorage.delete are
coordinated and atomic.
app/src/main/java/me/capcom/smsgateway/modules/webhooks/workers/WebhookQueueProcessorWorker.kt (1)

Closing the HTTP client after every request creates unnecessary overhead.

The client.close() in the finally block means a new HttpClient instance is created and destroyed for each webhook. This is inefficient when processing batches. Consider making the client a class-level property with lazy initialization and closing it in a cleanup method, or reusing across the batch.

♻️ Suggested improvement
-    private val client
-        get() = HttpClient(OkHttp) {
+    private val client by lazy {
+        HttpClient(OkHttp) {
             install(HttpTimeout) {
                 ...
             }
             ...
         }
+    }

Then remove the client.close() from the finally block, or close it once at worker completion.
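The effect of the by lazy change can be illustrated with a stand-in client that counts constructions (FakeHttpClient is hypothetical; the real worker uses Ktor's HttpClient(OkHttp)):

```kotlin
class FakeHttpClient {
    init { constructions++ }
    fun post(url: String) = "ok: $url"
    fun close() { closed = true }
    companion object {
        var constructions = 0
        var closed = false
    }
}

class Worker {
    // Lazy: one client, created on first use, reused across the whole batch.
    // (A property with a custom getter would instead build a NEW client on
    // every access, which is the behavior the review flags.)
    private val client by lazy { FakeHttpClient() }

    fun sendBatch(urls: List<String>) = urls.map { client.post(it) }
    fun shutdown() = client.close()  // close once, at worker completion
}

fun main() {
    val w = Worker()
    w.sendBatch(listOf("https://a.example", "https://b.example", "https://c.example"))
    w.shutdown()
    println("constructions=${FakeHttpClient.constructions}")  // 1, not 3
}
```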

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/workers/WebhookQueueProcessorWorker.kt`
around lines 371 - 373, The HttpClient is being closed in the finally block for
each webhook which forces recreation per request; move the client to a
class-level property in WebhookQueueProcessorWorker (e.g., a private lateinit or
lazy val named client) so it is created once and reused across batch processing
(the code that currently constructs the local client and calls client.close() in
the finally should stop doing that), remove the per-request client.close() call,
and add a single cleanup/shutdown method on WebhookQueueProcessorWorker (or hook
into the worker lifecycle) to close the client once when the worker completes or
is disposed.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Outside diff comments:
In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt`:
- Around line 107-113: In permanentlyFailWebhook, reverse the operations so
dao.markAsPermanentlyFailed(...) is called first and only on success call
payloadStorage.delete(...); also remove the redundant webhookId.toString() and
pass webhookId directly to both dao.markAsPermanentlyFailed (id) and
payloadStorage.delete so you don't delete the payload prematurely or use
unnecessary conversions.
- Line 84: The DAO call dao.getById(webhookId) currently expects a non-null
WebhookQueueEntity and will throw if the row was deleted after startProcessing;
change the DAO method signature to return WebhookQueueEntity? (nullable) and
update WebhookQueueRepository's code that calls getById (the startProcessing
flow) to handle a null result gracefully—e.g., detect null, log/metrics and
abort processing or mark the job as finished—so that deleted rows do not cause
an exception; reference: dao.getById, WebhookQueueEntity, startProcessing,
WebhookQueueRepository.

---

Duplicate comments:
In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt`:
- Around line 70-73: The current completeWebhook function deletes the payload
before updating the DB which can leave the queue row stuck in "processing"
without its payload if dao.markAsCompleted(id = webhookId) throws; change the
ordering so dao.markAsCompleted(webhookId) is executed first and only call
payloadStorage.delete(webhookId) after it returns successfully, and optionally
wrap dao.markAsCompleted in a try/catch to avoid deleting on failure (or rethrow
after ensuring payload remains); references: completeWebhook,
payloadStorage.delete, dao.markAsCompleted.

---

Nitpick comments:
In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt`:
- Around line 120-122: The current sequence in WebhookQueueRepository (calling
dao.getOldEntryIds, payloadStorage.delete for each id, then
dao.cleanupOldEntries) can leave storage and DB out-of-sync if
dao.cleanupOldEntries fails; wrap the delete operations in a DB transaction or
switch to a two-phase approach: begin a transaction in the DAO (or add a
dao.markEntriesForDeletion method), fetch and mark the entries (e.g., set a
pending_delete flag) via WebhookQueueRepository, commit the DB change, then
delete payloadStorage files for the marked NanoId keys and finally remove rows
with dao.cleanupMarkedEntries (or perform deletions inside the same transaction
if your storage supports transactional semantics) so that functions
dao.getOldEntryIds, dao.cleanupOldEntries, and payloadStorage.delete are
coordinated and atomic.

In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/workers/WebhookQueueProcessorWorker.kt`:
- Around line 371-373: The HttpClient is being closed in the finally block for
each webhook which forces recreation per request; move the client to a
class-level property in WebhookQueueProcessorWorker (e.g., a private lateinit or
lazy val named client) so it is created once and reused across batch processing
(the code that currently constructs the local client and calls client.close() in
the finally should stop doing that), remove the per-request client.close() call,
and add a single cleanup/shutdown method on WebhookQueueProcessorWorker (or hook
into the worker lifecycle) to close the client once when the worker completes or
is disposed.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: dae6d6ab-5810-4fa1-88ca-032b6f7d6f64

📥 Commits

Reviewing files that changed from the base of the PR and between de9c5f5 and 03eab39.

📒 Files selected for processing (9)
  • app/schemas/me.capcom.smsgateway.data.AppDatabase/20.json
  • app/src/main/java/me/capcom/smsgateway/data/AppDatabase.kt
  • app/src/main/java/me/capcom/smsgateway/modules/webhooks/Module.kt
  • app/src/main/java/me/capcom/smsgateway/modules/webhooks/WebhookPayloadStorage.kt
  • app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueDao.kt
  • app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueEntity.kt
  • app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt
  • app/src/main/java/me/capcom/smsgateway/modules/webhooks/workers/SendWebhookWorker.kt
  • app/src/main/java/me/capcom/smsgateway/modules/webhooks/workers/WebhookQueueProcessorWorker.kt
✅ Files skipped from review due to trivial changes (4)
  • app/src/main/java/me/capcom/smsgateway/modules/webhooks/Module.kt
  • app/src/main/java/me/capcom/smsgateway/modules/webhooks/workers/SendWebhookWorker.kt
  • app/src/main/java/me/capcom/smsgateway/data/AppDatabase.kt
  • app/schemas/me.capcom.smsgateway.data.AppDatabase/20.json
🚧 Files skipped from review as they are similar to previous changes (3)
  • app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueEntity.kt
  • app/src/main/java/me/capcom/smsgateway/modules/webhooks/WebhookPayloadStorage.kt
  • app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueDao.kt

@capcom6 capcom6 merged commit a9c13f9 into master Apr 14, 2026
3 checks passed
@capcom6 capcom6 deleted the codex/fix-large-mms-attachment-issue branch April 14, 2026 00:57
@coderabbitai coderabbitai Bot mentioned this pull request Apr 23, 2026
Development

Successfully merging this pull request may close these issues.

Soft lock on large mms:downloaded webhooks