Conversation
Walkthrough

Adds file-backed webhook payload storage and integrates it into DI, repository, DAO/entity types, workers, and Room schema; enqueue persists payloads as file refs, processing reads stored payloads and deletes files on completion or permanent failure; DB version bumped to 20 with auto-migration.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
participant Enqueuer as Enqueuer
participant Storage as WebhookPayloadStorage
participant Repo as WebhookQueueRepository
participant DB as Database/DAO
participant Processor as WebhookQueueProcessorWorker
participant Sender as SendWebhookWorker
Enqueuer->>Storage: save(payload)
Storage-->>Enqueuer: "file:<key>"
Enqueuer->>Repo: enqueueWebhook(url, payloadRef="file:<key>")
Repo->>DB: insert webhook row (id:String, payload="file:<key>")
Processor->>DB: fetch pending webhook
DB-->>Processor: webhook row (id:String, payload:"file:<key>")
Processor->>Storage: read("file:<key>")
Storage-->>Processor: payload JSON or null
alt payload present
Processor->>Sender: send HTTP request (payload JSON)
Sender-->>Processor: result (success/failure)
alt success
Processor->>DB: markAsCompleted(id)
Processor->>Storage: delete("<key>")
else failure
Processor->>Repo: scheduleRetry(id, error)
end
else payload missing
Processor->>Repo: permanentlyFailWebhook(id, "missing payload")
end
```
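For orientation, here is the same flow as a condensed Kotlin sketch. All names come from the diagram above; the `storage`/`repository` handles and the `send` helper are assumptions introduced for illustration, and error handling is omitted:

```kotlin
// Enqueue side: persist the payload to a file and store only the reference.
suspend fun enqueue(url: String, payloadJson: String) {
    val ref = storage.save(payloadJson)           // returns "file:<key>"
    repository.enqueueWebhook(url, payload = ref)
}

// Processing side: resolve the reference lazily, clean up on terminal states.
suspend fun process(webhook: WebhookQueueEntity) {
    val payload = storage.read(webhook.payload)   // null if the file is gone
    if (payload == null) {
        repository.permanentlyFailWebhook(webhook.id, "missing payload")
        return
    }
    if (send(webhook.url, payload)) {             // HTTP delivery
        repository.completeWebhook(webhook.id)
        storage.delete(webhook.payload)           // remove the stored ref
    } else {
        repository.scheduleRetry(webhook.id, "send failed")
    }
}
```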
Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
🚥 Pre-merge checks | ✅ Passed checks (3 passed)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
app/src/main/java/me/capcom/smsgateway/modules/webhooks/workers/SendWebhookWorker.kt (1)
222-252: ⚠️ Potential issue | 🔴 Critical: Don't fall back to direct delivery once the webhook is already enqueued.

This `try` covers `enqueueWebhook()`, logging, and `WebhookQueueProcessorWorker.start()`. If anything after the insert throws, the catch path schedules the legacy worker too, so the same webhook can be delivered twice while the queued row still exists. The same broad catch also leaves an orphaned payload file behind when `save()` succeeds but enqueueing fails.

🔁 Narrow the fallback window and clean up orphaned payload refs
```diff
     fun start(
         context: Context,
         url: String,
         data: WebHookEventDTO,
         internetRequired: Boolean,
     ) {
         val logsService = get<LogsService>()
+        var payloadRef: String? = null
+        var queueEnqueued = false

         // Enqueue the webhook event to the persistent queue
         try {
             val queueRepository = get<WebhookQueueRepository>()
             val payloadStorage = get<WebhookPayloadStorage>()
-            val payloadRef = payloadStorage.save(gson.toJson(data))
+            payloadRef = payloadStorage.save(gson.toJson(data))

             // Enqueue to the persistent queue
             val queueId = runBlocking {
                 queueRepository.enqueueWebhook(
                     url = url,
-                    payload = payloadRef,
+                    payload = requireNotNull(payloadRef),
                 )
             }
+            queueEnqueued = true

             logsService.insert(
                 priority = LogEntry.Priority.DEBUG,
                 module = NAME,
                 message = "Webhook enqueued to persistent queue",
@@
             WebhookQueueProcessorWorker.start(
                 context = context,
                 internetRequired = internetRequired
             )
         } catch (e: Exception) {
+            if (queueEnqueued) {
+                logsService.insert(
+                    priority = LogEntry.Priority.ERROR,
+                    module = NAME,
+                    message = "Webhook was already enqueued; skipping direct-send fallback: ${e.message}",
+                    context = mapOf("url" to url)
+                )
+                return
+            }
+            payloadRef?.let { runCatching { get<WebhookPayloadStorage>().delete(it) } }
+
             // Fallback to direct processing if queue enqueue fails
             logsService.insert(
                 priority = LogEntry.Priority.WARN,
                 module = NAME,
                 message = "Queue enqueue failed, falling back to direct processing: ${e.message}",
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/main/java/me/capcom/smsgateway/modules/webhooks/workers/SendWebhookWorker.kt` around lines 222 - 252, The try/catch currently wraps payloadStorage.save, queueRepository.enqueueWebhook, logsService.insert and WebhookQueueProcessorWorker.start which allows a fallback to the legacy direct-delivery worker even when enqueueWebhook succeeded and can leave orphaned payload files; restrict the try so that payloadStorage.save and enqueueWebhook are called and checked first, only schedule the legacy worker (the fallback) inside a smaller catch that runs if enqueueWebhook fails, and on enqueue failure delete the saved payloadRef (use WebhookPayloadStorage.delete or equivalent) before throwing/logging; ensure WebhookQueueProcessorWorker.start and logsService.insert only run after successful enqueue and do not trigger the fallback when enqueueWebhook returns a valid queueId.
🧹 Nitpick comments (1)
app/src/main/java/me/capcom/smsgateway/modules/webhooks/workers/WebhookQueueProcessorWorker.kt (1)
303-304: Distinguish missing payload files from retriable send failures.
`WebhookPayloadStorage.read()` returning `null` means the referenced file is already gone. Returning `false` here sends the row through the normal retry path, but retries cannot reconstruct that payload and only burn attempts on an unrecoverable condition. Consider surfacing a terminal failure result and logging it explicitly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/main/java/me/capcom/smsgateway/modules/webhooks/workers/WebhookQueueProcessorWorker.kt` around lines 303 - 304, WebhookPayloadStorage.read(webhook.payload) returning null indicates a terminal missing payload; don't treat it as a retriable send failure. Update the processing in WebhookQueueProcessorWorker (the block using payloadStorage.read(...) and gson.fromJson(...)) to detect a null payload, log an explicit error including webhook.id and webhook.payload, and return/emit a terminal failure result (not false) so the retry mechanism won’t requeue; only proceed to gson.fromJson(...) and normal retry semantics when payload is non-null.
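One way to surface the terminal state, as a hedged sketch: `SendOutcome`, `processPayload`, and the `send` callback are hypothetical names introduced here (the worker currently returns a `Boolean`), while `WebhookQueueEntity` and `WebhookPayloadStorage` are the types this review discusses:

```kotlin
// Hypothetical result type that separates terminal from retriable failures.
sealed class SendOutcome {
    object Success : SendOutcome()
    data class Retriable(val error: String) : SendOutcome()
    data class Terminal(val error: String) : SendOutcome()
}

// A missing payload file is unrecoverable, so report it as terminal.
suspend fun processPayload(
    webhook: WebhookQueueEntity,
    payloadStorage: WebhookPayloadStorage,
    send: suspend (url: String, json: String) -> SendOutcome,
): SendOutcome {
    val json = payloadStorage.read(webhook.payload)
        ?: return SendOutcome.Terminal(
            "payload file missing for webhook ${webhook.id} (ref=${webhook.payload})"
        )
    return send(webhook.url, json) // normal retry semantics only when payload exists
}
```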
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt`:
- Around line 42-44: Change WebhookQueueDao.getById and
WebhookQueueRepository.getById to return a nullable WebhookQueueEntity? instead
of non-nullable to reflect Room's possible null result, then update both call
sites inside handleWebhookFailure to check for null (early return or no-op)
before using the entity; specifically modify the DAO signature, the repository
method getById(webhookId: Long): WebhookQueueEntity?, and adjust the two places
in handleWebhookFailure that call repository.getById(...) to handle the nullable
result safely (e.g., val entity = getById(id) ?: return or guard with if (entity
== null) { /* handle missing */ }).
---
Outside diff comments:
In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/workers/SendWebhookWorker.kt`:
- Around line 222-252: The try/catch currently wraps payloadStorage.save,
queueRepository.enqueueWebhook, logsService.insert and
WebhookQueueProcessorWorker.start which allows a fallback to the legacy
direct-delivery worker even when enqueueWebhook succeeded and can leave orphaned
payload files; restrict the try so that payloadStorage.save and enqueueWebhook
are called and checked first, only schedule the legacy worker (the fallback)
inside a smaller catch that runs if enqueueWebhook fails, and on enqueue failure
delete the saved payloadRef (use WebhookPayloadStorage.delete or equivalent)
before throwing/logging; ensure WebhookQueueProcessorWorker.start and
logsService.insert only run after successful enqueue and do not trigger the
fallback when enqueueWebhook returns a valid queueId.
---
Nitpick comments:
In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/workers/WebhookQueueProcessorWorker.kt`:
- Around line 303-304: WebhookPayloadStorage.read(webhook.payload) returning
null indicates a terminal missing payload; don't treat it as a retriable send
failure. Update the processing in WebhookQueueProcessorWorker (the block using
payloadStorage.read(...) and gson.fromJson(...)) to detect a null payload, log
an explicit error including webhook.id and webhook.payload, and return/emit a
terminal failure result (not false) so the retry mechanism won’t requeue; only
proceed to gson.fromJson(...) and normal retry semantics when payload is
non-null.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: acd60be9-3651-4b2b-9e4b-01d04b37a4d9
📒 Files selected for processing (6)
- app/src/main/java/me/capcom/smsgateway/modules/webhooks/Module.kt
- app/src/main/java/me/capcom/smsgateway/modules/webhooks/WebhookPayloadStorage.kt
- app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt
- app/src/main/java/me/capcom/smsgateway/modules/webhooks/workers/SendWebhookWorker.kt
- app/src/main/java/me/capcom/smsgateway/modules/webhooks/workers/WebhookQueueProcessorWorker.kt
- docs/plans/mms-large-attachments-plan.md
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt (1)
19-33: ⚠️ Potential issue | 🟠 Major: Rollback the saved payload file if the insert fails.

`payloadStorage.save(payload)` happens before `dao.insertWebhook(...)`. If the insert throws, nothing points at that file anymore and it will never be cleaned up.

🩹 Minimal fix

```diff
     suspend fun enqueueWebhook(
         url: String,
         payload: String,
     ): Long {
         val payloadRef = payloadStorage.save(payload)
-        return dao.insertWebhook(
-            WebhookQueueEntity(
-                url = url,
-                payload = payloadRef,
-                status = WebhookStatus.PENDING,
-                createdAt = System.currentTimeMillis(),
-                nextAttempt = System.currentTimeMillis()
-            )
-        )
+        return try {
+            dao.insertWebhook(
+                WebhookQueueEntity(
+                    url = url,
+                    payload = payloadRef,
+                    status = WebhookStatus.PENDING,
+                    createdAt = System.currentTimeMillis(),
+                    nextAttempt = System.currentTimeMillis()
+                )
+            )
+        } catch (e: Exception) {
+            payloadStorage.delete(payloadRef)
+            throw e
+        }
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt` around lines 19 - 33, The saved payload file created by payloadStorage.save(...) in enqueueWebhook can leak if dao.insertWebhook(...) throws; wrap the insert in a try/catch that on any exception calls payloadStorage.delete(payloadRef) to roll back the saved file and then rethrows the original exception (do not swallow it). Specifically, in enqueueWebhook, keep the call to payloadStorage.save(payload) to obtain payloadRef, then perform dao.insertWebhook(WebhookQueueEntity(...)) inside a try block and in the catch call payloadStorage.delete(payloadRef) before throwing the caught exception; this ensures payloadStorage.save, payloadStorage.delete, enqueueWebhook, and dao.insertWebhook are used to locate and fix the leak.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt`:
- Around line 60-62: The code currently deletes payload files using
webhookId.toString() and deletes before updating DB; instead, read the stored
payload reference (e.g., payloadFilename/payloadRef) from the webhook record,
perform the DAO status update (e.g., in completeWebhook, permanentlyFailWebhook
and the retry path such as failAndRetryWebhook), verify the update succeeded,
and only then call payloadStorage.delete(payloadRef); do not use
webhookId.toString() as the filename and do not delete prior to a successful DAO
update.
In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/WebhookPayloadStorage.kt`:
- Around line 38-42: The delete() method treats its key parameter as a raw
filename but save() stores names prefixed with "file:<uuid>" and read() already
handles that format; update delete(key: String) to normalize the incoming key by
stripping the "file:" prefix (if present) before constructing File(payloadDir,
normalizedName) and deleting it so callers that pass the queued payload value
actually remove the stored file; touch the WebhookPayloadStorage.delete method
to perform this normalization (consistent with save() and read()).
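A minimal sketch of that normalization, assuming the storage keeps files under a `payloadDir` directory and `save()` returns references of the form `file:<uuid>`, as this review describes (not verified against the actual source):

```kotlin
import java.io.File

class WebhookPayloadStorage(private val payloadDir: File) {
    private companion object {
        const val PREFIX = "file:"
    }

    // Accept either the raw key or the stored "file:<uuid>" reference,
    // mirroring how read() already handles both forms.
    fun delete(key: String) {
        val name = key.removePrefix(PREFIX)
        File(payloadDir, name).delete()
    }
}
```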
---
Outside diff comments:
In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt`:
- Around line 19-33: The saved payload file created by payloadStorage.save(...)
in enqueueWebhook can leak if dao.insertWebhook(...) throws; wrap the insert in
a try/catch that on any exception calls payloadStorage.delete(payloadRef) to
roll back the saved file and then rethrows the original exception (do not
swallow it). Specifically, in enqueueWebhook, keep the call to
payloadStorage.save(payload) to obtain payloadRef, then perform
dao.insertWebhook(WebhookQueueEntity(...)) inside a try block and in the catch
call payloadStorage.delete(payloadRef) before throwing the caught exception;
this ensures payloadStorage.save, payloadStorage.delete, enqueueWebhook, and
dao.insertWebhook are used to locate and fix the leak.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 5c3ba2a4-7b21-4931-8c86-7bec5488f0ce
📒 Files selected for processing (5)
- app/src/main/java/me/capcom/smsgateway/modules/webhooks/WebhookPayloadStorage.kt
- app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueDao.kt
- app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt
- app/src/main/java/me/capcom/smsgateway/modules/webhooks/workers/SendWebhookWorker.kt
- app/src/main/java/me/capcom/smsgateway/modules/webhooks/workers/WebhookQueueProcessorWorker.kt
✅ Files skipped from review due to trivial changes (2)
- app/src/main/java/me/capcom/smsgateway/modules/webhooks/workers/SendWebhookWorker.kt
- app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueDao.kt
🚧 Files skipped from review as they are similar to previous changes (1)
- app/src/main/java/me/capcom/smsgateway/modules/webhooks/workers/WebhookQueueProcessorWorker.kt
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt (1)
20-37: 🛠️ Refactor suggestion | 🟠 Major: Return the actual queue id from `enqueueWebhook()`.

Lines 24-31 generate the canonical queue id as `id`, but the method still returns the DAO result instead. That leaves this repository API out of sync with the rest of the queue surface, which now addresses rows by `String`, and any caller that keeps the return value cannot use it to refer back to the same webhook. Return `id` after a successful insert, or make this method `Unit` if nobody needs it.

🔧 Minimal alignment

```diff
-    ): Long {
+    ): String {
         val id = NanoIdUtils.randomNanoId()
         val payloadRef = payloadStorage.save(id, payload)
-        return dao.insertWebhook(
+        dao.insertWebhook(
             WebhookQueueEntity(
                 id = id,
                 url = url,
                 payload = payloadRef,
                 status = WebhookStatus.PENDING,
                 createdAt = System.currentTimeMillis(),
                 nextAttempt = System.currentTimeMillis()
             )
         )
+        return id
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt` around lines 20 - 37, enqueueWebhook currently generates the canonical string id via NanoIdUtils.randomNanoId() and uses it to save the payload and construct the WebhookQueueEntity, but returns the DAO insert result (Long) instead of that id; change enqueueWebhook (the suspend function enqueueWebhook) to return the generated id (the variable id) after a successful dao.insertWebhook(WebhookQueueEntity(...)) call (or make the function Unit if the return value is truly unused) so callers receive the string queue id that matches the stored entity and payloadStorage.save(id, payload) usage.

app/src/main/java/me/capcom/smsgateway/data/AppDatabase.kt (1)
36-56: ⚠️ Potential issue | 🟠 Major: Verify the 19→20 Room migration explicitly handles the `webhook_queue.id` primary key type change.

The migration from version 19 to 20 changes `webhook_queue.id` from `INTEGER PRIMARY KEY AUTOINCREMENT` to `TEXT NOT NULL PRIMARY KEY`. This is a primary key type rewrite, not a simple schema addition, and it requires explicit verification before shipping. Currently, this relies on a bare `AutoMigration(from = 19, to = 20)` without an `AutoMigrationSpec` to guide Room through the transformation. Room 2.4.3's auto-migrations handle additive changes (adding tables/columns, deleting tables/columns), but changing a primary key's type is complex. Either test this migration thoroughly on existing databases (v19 → v20), or add an explicit `AutoMigrationSpec` or `MIGRATION_19_20` handler to control the transformation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/main/java/me/capcom/smsgateway/data/AppDatabase.kt` around lines 36 - 56, The AutoMigration(from = 19, to = 20) is insufficient because webhook_queue.id changes type (INTEGER PK -> TEXT PK); update AppDatabase to replace the bare AutoMigration(from = 19, to = 20) with an explicit migration strategy: either add an AutoMigrationSpec implementation for the 19→20 auto-migration or register a manual Migration object (e.g., MIGRATION_19_20) that performs a controlled table copy/transform (create new table with TEXT primary key, copy/convert rows, drop old table, rename) and include it in the database builder; ensure the migration targets the webhook_queue.id column conversion and is exercised by a unit/instrumentation migration test upgrading a v19 DB to v20.
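A hedged sketch of the manual table-copy migration the prompt outlines. The column list here is an assumption pieced together from fields mentioned in this review (`id`, `url`, `payload`, `status`, `createdAt`, `nextAttempt`); the real set must be taken from `app/schemas/me.capcom.smsgateway.data.AppDatabase/20.json` before using anything like this:

```kotlin
import androidx.room.migration.Migration
import androidx.sqlite.db.SupportSQLiteDatabase

// Sketch only: column names and types are assumptions, not the real schema.
val MIGRATION_19_20 = object : Migration(19, 20) {
    override fun migrate(database: SupportSQLiteDatabase) {
        // 1. Create the replacement table with a TEXT primary key.
        database.execSQL(
            """
            CREATE TABLE webhook_queue_new (
                id TEXT NOT NULL PRIMARY KEY,
                url TEXT NOT NULL,
                payload TEXT NOT NULL,
                status TEXT NOT NULL,
                createdAt INTEGER NOT NULL,
                nextAttempt INTEGER NOT NULL
            )
            """.trimIndent()
        )
        // 2. Copy rows, converting the old INTEGER ids to TEXT.
        database.execSQL(
            """
            INSERT INTO webhook_queue_new (id, url, payload, status, createdAt, nextAttempt)
            SELECT CAST(id AS TEXT), url, payload, status, createdAt, nextAttempt
            FROM webhook_queue
            """.trimIndent()
        )
        // 3. Swap the tables.
        database.execSQL("DROP TABLE webhook_queue")
        database.execSQL("ALTER TABLE webhook_queue_new RENAME TO webhook_queue")
    }
}
```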
♻️ Duplicate comments (1)
app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt (1)
63-65: ⚠️ Potential issue | 🟠 Major: Delete the stored payload ref after the status update.

As implemented in `app/src/main/java/me/capcom/smsgateway/modules/webhooks/WebhookPayloadStorage.kt:15-19`, `save()` persists `file:<key>`. Lines 64 and 101 delete the raw `webhookId` before the DAO update, so completed/permanently-failed rows leak their payload files, and a DAO error here would leave a live row pointing at a missing payload. Pass the stored `payload` ref through these methods and delete it only after the status change succeeds.

🧹 Minimal fix

```diff
-    suspend fun completeWebhook(webhookId: String) {
-        payloadStorage.delete(webhookId)
-        dao.markAsCompleted(id = webhookId)
+    suspend fun completeWebhook(webhookId: String, payloadRef: String) {
+        dao.markAsCompleted(id = webhookId)
+        payloadStorage.delete(payloadRef)
     }
@@
-            permanentlyFailWebhook(
-                webhookId = webhookId,
-                error = error ?: "Max retries exceeded",
-            )
+            permanentlyFailWebhook(
+                webhookId = webhookId,
+                payloadRef = webhook.payload,
+                error = error ?: "Max retries exceeded",
+            )
@@
-    suspend fun permanentlyFailWebhook(webhookId: String, error: String) {
-        payloadStorage.delete(webhookId.toString())
+    suspend fun permanentlyFailWebhook(webhookId: String, payloadRef: String, error: String) {
         dao.markAsPermanentlyFailed(
             id = webhookId,
             error = error
         )
+        payloadStorage.delete(payloadRef)
     }
```

You'll also need to pass `webhook.payload` from `app/src/main/java/me/capcom/smsgateway/modules/webhooks/workers/WebhookQueueProcessorWorker.kt:246-248`.

Also applies to: 90-105
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt` around lines 63 - 65, The code deletes the stored payload by key before updating DB, causing leaked files or broken rows on DAO errors; modify completeWebhook (and the analogous failure method used at 90-105) to accept and propagate the stored payload ref (returned by WebhookPayloadStorage.save, e.g., the "file:<key>" value) instead of deleting by webhookId, call dao.markAsCompleted/markAsPermanentlyFailed first, and only call payloadStorage.delete(payloadRef) after the DAO update succeeds; update callers (notably WebhookQueueProcessorWorker where you currently pass webhookId at lines ~246-248) to pass webhook.payload (the stored payload ref) through into these repository methods.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/workers/WebhookQueueProcessorWorker.kt`:
- Around line 301-302: The current code in WebhookQueueProcessorWorker forces
every webhook.payload through payloadStorage.read(webhook.payload) which breaks
pre-upgrade rows that contain inline JSON; change the logic so that if
webhook.payload begins with the storage prefix (e.g., "file:") you call
payloadStorage.read(webhook.payload) and otherwise treat webhook.payload as the
inline JSON payload string and pass that directly to gson.fromJson; update the
block around payloadStorage.read(...) and gson.fromJson(...) (where
webhook.payload is used) to implement this fallback so old queue rows remain
readable.
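The fallback described in that prompt fits in a few lines; a sketch assuming the `file:` prefix convention from this PR:

```kotlin
// Resolve a payload column that may hold either a "file:" reference (new rows)
// or inline JSON (pre-upgrade rows).
fun resolvePayload(stored: String, payloadStorage: WebhookPayloadStorage): String? =
    if (stored.startsWith("file:")) {
        payloadStorage.read(stored) // null if the referenced file is gone
    } else {
        stored                      // legacy row: the column holds the JSON itself
    }
```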
---
Outside diff comments:
In `@app/src/main/java/me/capcom/smsgateway/data/AppDatabase.kt`:
- Around line 36-56: The AutoMigration(from = 19, to = 20) is insufficient
because webhook_queue.id changes type (INTEGER PK -> TEXT PK); update
AppDatabase to replace the bare AutoMigration(from = 19, to = 20) with an
explicit migration strategy: either add an AutoMigrationSpec implementation for
the 19→20 auto-migration or register a manual Migration object (e.g.,
MIGRATION_19_20) that performs a controlled table copy/transform (create new
table with TEXT primary key, copy/convert rows, drop old table, rename) and
include it in the database builder; ensure the migration targets the
webhook_queue.id column conversion and is exercised by a unit/instrumentation
migration test upgrading a v19 DB to v20.
In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt`:
- Around line 20-37: enqueueWebhook currently generates the canonical string id
via NanoIdUtils.randomNanoId() and uses it to save the payload and construct the
WebhookQueueEntity, but returns the DAO insert result (Long) instead of that id;
change enqueueWebhook (the suspend function enqueueWebhook) to return the
generated id (the variable id) after a successful
dao.insertWebhook(WebhookQueueEntity(...)) call (or make the function Unit if
the return value is truly unused) so callers receive the string queue id that
matches the stored entity and payloadStorage.save(id, payload) usage.
---
Duplicate comments:
In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt`:
- Around line 63-65: The code deletes the stored payload by key before updating
DB, causing leaked files or broken rows on DAO errors; modify completeWebhook
(and the analogous failure method used at 90-105) to accept and propagate the
stored payload ref (returned by WebhookPayloadStorage.save, e.g., the
"file:<key>" value) instead of deleting by webhookId, call
dao.markAsCompleted/markAsPermanentlyFailed first, and only call
payloadStorage.delete(payloadRef) after the DAO update succeeds; update callers
(notably WebhookQueueProcessorWorker where you currently pass webhookId at lines
~246-248) to pass webhook.payload (the stored payload ref) through into these
repository methods.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 3b73000e-24a9-4cc2-85b3-737cd6e72bae
📒 Files selected for processing (7)
- app/schemas/me.capcom.smsgateway.data.AppDatabase/20.json
- app/src/main/java/me/capcom/smsgateway/data/AppDatabase.kt
- app/src/main/java/me/capcom/smsgateway/modules/webhooks/WebhookPayloadStorage.kt
- app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueDao.kt
- app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueEntity.kt
- app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt
- app/src/main/java/me/capcom/smsgateway/modules/webhooks/workers/WebhookQueueProcessorWorker.kt
🚧 Files skipped from review as they are similar to previous changes (1)
- app/src/main/java/me/capcom/smsgateway/modules/webhooks/WebhookPayloadStorage.kt
Actionable comments posted: 1
♻️ Duplicate comments (2)
app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt (1)
65-67: ⚠️ Potential issue | 🟠 Major: Persist the terminal state before deleting the payload file.
Lines 66 and 103 delete first. If the DAO update fails afterward, the queue entry stays live but its backing payload is gone, so retries or stuck-worker recovery cannot re-read it.
🩹 Minimal fix
```diff
     suspend fun completeWebhook(webhookId: String) {
-        payloadStorage.delete(webhookId)
         dao.markAsCompleted(id = webhookId)
+        payloadStorage.delete(webhookId)
     }
@@
     suspend fun permanentlyFailWebhook(webhookId: String, error: String) {
-        payloadStorage.delete(webhookId.toString())
         dao.markAsPermanentlyFailed(
             id = webhookId,
             error = error
         )
+        payloadStorage.delete(webhookId)
     }
```

Also applies to: 102-107
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt` around lines 65 - 67, completeWebhook currently deletes the payload via payloadStorage.delete(webhookId) before persisting the terminal state with dao.markAsCompleted(id = webhookId); change the order so you first call dao.markAsCompleted(...) and only upon successful completion delete the payload. Ensure the dao.markAsCompleted call is awaited/checked for success and handle/propagate errors so payload is not removed on failure; apply the same change to the other method that also calls payloadStorage.delete before updating the DAO (i.e., any code paths using payloadStorage.delete + dao.* update).

app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueDao.kt (1)
20-21: ⚠️ Potential issue | 🟠 Major: Make `getById()` nullable.

Line 21 cannot represent a missing row safely. This lookup is used on recovery paths, so `WebhookQueueEntity?` is the right contract; otherwise callers can only discover "not found" via a runtime crash.

🩹 Minimal fix

```diff
-    suspend fun getById(id: String): WebhookQueueEntity
+    suspend fun getById(id: String): WebhookQueueEntity?
```

Then update downstream callers to handle the null case explicitly.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueDao.kt` around lines 20 - 21, Change the DAO contract for the lookup to return a nullable entity so missing rows are represented safely: update the suspend function WebhookQueueDao.getById(id: String): WebhookQueueEntity to return WebhookQueueEntity? and then locate and update all downstream callers that invoke getById to handle the null case explicitly (e.g., recovery path branches or early returns) instead of assuming a non-null result.
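A sketch of the null handling on the recovery path, written as if inside `WebhookQueueRepository`; the body of `handleWebhookFailure` is illustrative, not the actual implementation:

```kotlin
suspend fun handleWebhookFailure(webhookId: String, error: String) {
    // The row may have been cleaned up concurrently; treat "not found" as a no-op.
    val webhook = dao.getById(webhookId) ?: return
    // ... existing retry / permanent-failure logic on the non-null entity ...
}
```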
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt`:
- Around line 24-38: The code saves the payload via payloadStorage.save(id,
payload) before inserting the DB row with dao.insertWebhook(...), so if
insertWebhook throws you get an orphaned file; fix by surrounding
dao.insertWebhook(WebhookQueueEntity(...)) with a try/catch that on any
exception calls payloadStorage.delete(payloadRef) (or the appropriate removal
method) to remove the saved payload and then rethrow the exception; use the
existing id/payloadRef from NanoIdUtils.randomNanoId()/payloadStorage.save to
identify the file and ensure cleanup happens before propagating the error.
---
Duplicate comments:
In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueDao.kt`:
- Around line 20-21: Change the DAO contract for the lookup to return a nullable
entity so missing rows are represented safely: update the suspend function
WebhookQueueDao.getById(id: String): WebhookQueueEntity to return
WebhookQueueEntity? and then locate and update all downstream callers that
invoke getById to handle the null case explicitly (e.g., recovery path branches
or early returns) instead of assuming a non-null result.
In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt`:
- Around line 65-67: completeWebhook currently deletes the payload via
payloadStorage.delete(webhookId) before persisting the terminal state with
dao.markAsCompleted(id = webhookId); change the order so you first call
dao.markAsCompleted(...) and only upon successful completion delete the payload.
Ensure the dao.markAsCompleted call is awaited/checked for success and
handle/propagate errors so payload is not removed on failure; apply the same
change to the other method that also calls payloadStorage.delete before updating
the DAO (i.e., any code paths using payloadStorage.delete + dao.* update).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: e9f741ed-2a2b-42ca-94dd-bf066d0a4bcc
📒 Files selected for processing (2)
- app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueDao.kt
- app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt
Actionable comments posted: 1
♻️ Duplicate comments (1)
app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt (1)
24-28: ⚠️ Potential issue | 🔴 Critical: Duplicate `payloadStorage.save()` call leaks files.

`payloadStorage.save(id, payload)` is called twice: once at line 25 (outside try) and again at line 28 (inside try). The outer `payloadRef` is shadowed and never used, so a file is written and immediately orphaned on every enqueue. Remove lines 25-26.

🐛 Proposed fix

```diff
     ): String {
         val id = NanoIdUtils.randomNanoId()
-        val payloadRef = payloadStorage.save(id, payload)
-
         try {
             val payloadRef = payloadStorage.save(id, payload)
             dao.insertWebhook(
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt` around lines 24 - 28, The code calls payloadStorage.save(id, payload) twice which creates orphaned files; remove the redundant call so payload is saved exactly once inside the try block. Specifically, delete the outer val payloadRef = payloadStorage.save(id, payload) and ensure the single payloadRef declared inside the try (from payloadStorage.save(id, payload)) is the one used/returned by the enclosing method (e.g., in WebhookQueueRepository functions that call NanoIdUtils.randomNanoId() and payloadStorage.save). Also ensure you don't shadow payloadRef and that any subsequent logic references the try-block payloadRef.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt`:
- Around line 39-42: The catch block is deleting by the original id instead of
the storage reference returned by payloadStorage.save; capture the returned
payloadRef from payloadStorage.save(...) before calling the DAO (e.g., assign
payloadRef = payloadStorage.save(...)), pass that payloadRef into the DAO insert
call if needed, and in the catch use payloadStorage.delete(payloadRef) (not
delete(id)) so the actual stored file (including any transformed reference like
"file:...") is removed; update WebhookQueueRepository methods that call
payloadStorage.save and payloadStorage.delete accordingly.
---
Duplicate comments:
In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt`:
- Around line 24-28: The code calls payloadStorage.save(id, payload) twice which
creates orphaned files; remove the redundant call so payload is saved exactly
once inside the try block. Specifically, delete the outer val payloadRef =
payloadStorage.save(id, payload) and ensure the single payloadRef declared
inside the try (from payloadStorage.save(id, payload)) is the one used/returned
by the enclosing method (e.g., in WebhookQueueRepository functions that call
NanoIdUtils.randomNanoId() and payloadStorage.save). Also ensure you don't
shadow payloadRef and that any subsequent logic references the try-block
payloadRef.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 74d4950e-cd61-49d9-b892-c02dad543610
📒 Files selected for processing (1)
app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt
♻️ Duplicate comments (2)
app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt (2)
38-40: ⚠️ Potential issue | 🟠 Major: Delete the saved payload by `payloadRef`, not the webhook id.

`WebhookQueueEntity.payload` stores the reference returned by `payloadStorage.save(...)`. If `insertWebhook()` throws, `delete(id)` will miss a `file:...` ref and leave the file orphaned.

🩹 Minimal fix

```diff
     } catch (e: Exception) {
-        payloadStorage.delete(id)
+        payloadStorage.delete(payloadRef)
         throw e
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt` around lines 38 - 40, In the catch block inside WebhookQueueRepository (around the insertWebhook flow) replace the call to payloadStorage.delete(id) with deletion using the saved payload reference (the value returned by payloadStorage.save, e.g., payloadRef or WebhookQueueEntity.payload) so the stored file ref (file:...) is removed if insertWebhook() throws; ensure you reference the same variable used when calling payloadStorage.save(...) rather than the webhook id.
70-72: ⚠️ Potential issue | 🟠 Major: Use the stored payload ref here too, and delete it only after the DAO update succeeds.

These paths currently delete `webhookId`, which cannot resolve the stored `file:...` ref. That leaks files today, and the pre-update delete ordering is still risky because a successful cleanup plus a failed DAO write would leave a live row without its payload.

🩹 Minimal fix

```diff
-    suspend fun completeWebhook(webhookId: String) {
-        payloadStorage.delete(webhookId)
-        dao.markAsCompleted(id = webhookId)
+    suspend fun completeWebhook(webhookId: String, payloadRef: String) {
+        dao.markAsCompleted(id = webhookId)
+        payloadStorage.delete(payloadRef)
     }
@@
             permanentlyFailWebhook(
                 webhookId = webhookId,
+                payloadRef = webhook.payload,
                 error = error ?: "Max retries exceeded",
             )
@@
-    suspend fun permanentlyFailWebhook(webhookId: String, error: String) {
-        payloadStorage.delete(webhookId.toString())
+    suspend fun permanentlyFailWebhook(webhookId: String, payloadRef: String, error: String) {
         dao.markAsPermanentlyFailed(
             id = webhookId,
             error = error
         )
+        payloadStorage.delete(payloadRef)
     }
```

Also applies to: 97-100, 107-112
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt` around lines 70 - 72, In completeWebhook (and the other similar methods around the other occurrences), read the stored payload reference from the DAO/row (the file:... ref) and use that when deleting from payloadStorage instead of passing webhookId; also move the delete to after the DAO update so the sequence is: fetch payloadRef from the record, call dao.markAsCompleted/markAsFailed/etc. with the id, then if the DAO update succeeded call payloadStorage.delete(payloadRef); update code paths that currently call payloadStorage.delete(webhookId) to use the retrieved payloadRef and ensure deletion happens post-DAO-write to avoid leaks or orphaned rows (refer to payloadStorage and dao.markAsCompleted/markAsFailed/whatever methods to locate the spots).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt`:
- Around line 38-40: In the catch block inside WebhookQueueRepository (around
the insertWebhook flow) replace the call to payloadStorage.delete(id) with
deletion using the saved payload reference (the value returned by
payloadStorage.save, e.g., payloadRef or WebhookQueueEntity.payload) so the
stored file ref (file:...) is removed if insertWebhook() throws; ensure you
reference the same variable used when calling payloadStorage.save(...) rather
than the webhook id.
- Around line 70-72: In completeWebhook (and the other similar methods around
the other occurrences), read the stored payload reference from the DAO/row (the
file:... ref) and use that when deleting from payloadStorage instead of passing
webhookId; also move the delete to after the DAO update so the sequence is:
fetch payloadRef from the record, call dao.markAsCompleted/markAsFailed/etc.
with the id, then if the DAO update succeeded call
payloadStorage.delete(payloadRef); update code paths that currently call
payloadStorage.delete(webhookId) to use the retrieved payloadRef and ensure
deletion happens post-DAO-write to avoid leaks or orphaned rows (refer to
payloadStorage and dao.markAsCompleted/markAsFailed/whatever methods to locate
the spots).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: bd892798-905f-4f65-979c-4b28740ac9be
📒 Files selected for processing (1)
app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt (2)
107-113: ⚠️ Potential issue | 🟠 Major: Same ordering issue and redundant `.toString()`.

- Delete should happen after `dao.markAsPermanentlyFailed()` succeeds to avoid orphaned rows without payloads.
- `webhookId` is already `String`, so `.toString()` is redundant.

🩹 Proposed fix

```diff
     suspend fun permanentlyFailWebhook(webhookId: String, error: String) {
-        payloadStorage.delete(webhookId.toString())
         dao.markAsPermanentlyFailed(
             id = webhookId,
             error = error
         )
+        payloadStorage.delete(webhookId)
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt` around lines 107 - 113, In permanentlyFailWebhook, reverse the operations so dao.markAsPermanentlyFailed(...) is called first and only on success call payloadStorage.delete(...); also remove the redundant webhookId.toString() and pass webhookId directly to both dao.markAsPermanentlyFailed (id) and payloadStorage.delete so you don't delete the payload prematurely or use unnecessary conversions.
84: ⚠️ Potential issue | 🟠 Major: `dao.getById()` will throw an exception if the webhook is deleted between `startProcessing` and this call.

The return type `WebhookQueueEntity` is non-nullable. If the webhook row is removed (e.g., by cleanup or a race condition) between `startProcessing` (line 64) and this call, Room will throw an exception since it cannot return null for a non-nullable type, and there's no error handling in place. Change the return type to `WebhookQueueEntity?` and handle the null case gracefully.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt` at line 84, The DAO call dao.getById(webhookId) currently expects a non-null WebhookQueueEntity and will throw if the row was deleted after startProcessing; change the DAO method signature to return WebhookQueueEntity? (nullable) and update WebhookQueueRepository's code that calls getById (the startProcessing flow) to handle a null result gracefully—e.g., detect null, log/metrics and abort processing or mark the job as finished—so that deleted rows do not cause an exception; reference: dao.getById, WebhookQueueEntity, startProcessing, WebhookQueueRepository.
♻️ Duplicate comments (1)
app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt (1)
70-73: ⚠️ Potential issue | 🟠 Major: Delete payload after successful DB update, not before.

If `dao.markAsCompleted()` throws after the payload is deleted, the queue row remains with status `processing` but its payload file is gone. Subsequent retry attempts will fail to read the payload, leaving the webhook in a corrupted state until it becomes permanently failed.

🩹 Proposed fix

```diff
     suspend fun completeWebhook(webhookId: String) {
-        payloadStorage.delete(webhookId)
         dao.markAsCompleted(id = webhookId)
+        payloadStorage.delete(webhookId)
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt` around lines 70 - 73, The current completeWebhook function deletes the payload before updating the DB which can leave the queue row stuck in "processing" without its payload if dao.markAsCompleted(id = webhookId) throws; change the ordering so dao.markAsCompleted(webhookId) is executed first and only call payloadStorage.delete(webhookId) after it returns successfully, and optionally wrap dao.markAsCompleted in a try/catch to avoid deleting on failure (or rethrow after ensuring payload remains); references: completeWebhook, payloadStorage.delete, dao.markAsCompleted.
🧹 Nitpick comments (2)
app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt (1)
120-122: Deletion by ID is correct, but consider transaction safety.

The IDs returned by `getOldEntryIds` are NanoIds (the storage keys), so `payloadStorage.delete(it)` correctly removes the files. However, if `dao.cleanupOldEntries()` fails mid-way, some files may be deleted while their DB rows persist, or vice versa if new entries are added between the two calls. For robustness, consider wrapping in a transaction or accepting eventual consistency.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt` around lines 120 - 122, The current sequence in WebhookQueueRepository (calling dao.getOldEntryIds, payloadStorage.delete for each id, then dao.cleanupOldEntries) can leave storage and DB out-of-sync if dao.cleanupOldEntries fails; wrap the delete operations in a DB transaction or switch to a two-phase approach: begin a transaction in the DAO (or add a dao.markEntriesForDeletion method), fetch and mark the entries (e.g., set a pending_delete flag) via WebhookQueueRepository, commit the DB change, then delete payloadStorage files for the marked NanoId keys and finally remove rows with dao.cleanupMarkedEntries (or perform deletions inside the same transaction if your storage supports transactional semantics) so that functions dao.getOldEntryIds, dao.cleanupOldEntries, and payloadStorage.delete are coordinated and atomic.
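A sketch of that two-phase approach; `markEntriesForDeletion` and `cleanupMarkedEntries` are hypothetical DAO methods that would have to be added, while `getOldEntryIds` and `payloadStorage.delete` are the existing calls named above:

```kotlin
suspend fun cleanupOldEntries() {
    val ids = dao.getOldEntryIds()
    dao.markEntriesForDeletion(ids)           // phase 1: record durable intent in the DB
    ids.forEach { payloadStorage.delete(it) } // phase 2: remove the payload files
    dao.cleanupMarkedEntries(ids)             // phase 3: drop the flagged rows
}
```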
app/src/main/java/me/capcom/smsgateway/modules/webhooks/workers/WebhookQueueProcessorWorker.kt (1)

371-373: HTTP client closed after every request creates unnecessary overhead.

The `client.close()` in the `finally` block means a new `HttpClient` instance is created and destroyed for each webhook. This is inefficient when processing batches. Consider making the client a class-level property with lazy initialization and closing it in a cleanup method, or reusing it across the batch.

♻️ Suggested improvement
```diff
-    private val client
-        get() = HttpClient(OkHttp) {
+    private val client by lazy {
+        HttpClient(OkHttp) {
             install(HttpTimeout) { ... }
             ...
         }
+    }
```

Then remove the `client.close()` from the `finally` block, or close it once at worker completion.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/main/java/me/capcom/smsgateway/modules/webhooks/workers/WebhookQueueProcessorWorker.kt` around lines 371 - 373, The HttpClient is being closed in the finally block for each webhook which forces recreation per request; move the client to a class-level property in WebhookQueueProcessorWorker (e.g., a private lateinit or lazy val named client) so it is created once and reused across batch processing (the code that currently constructs the local client and calls client.close() in the finally should stop doing that), remove the per-request client.close() call, and add a single cleanup/shutdown method on WebhookQueueProcessorWorker (or hook into the worker lifecycle) to close the client once when the worker completes or is disposed.
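Closing once at worker completion could look like the following sketch, assuming a `CoroutineWorker` and a hypothetical `processBatch()` helper; WorkManager creates a fresh worker instance (and thus a fresh lazy client) for each run:

```kotlin
override suspend fun doWork(): Result {
    return try {
        processBatch()    // reuses the class-level `client` for every webhook
        Result.success()
    } finally {
        client.close()    // single shutdown after the whole batch
    }
}
```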
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt`:
- Around line 107-113: In permanentlyFailWebhook, reverse the operations so
dao.markAsPermanentlyFailed(...) is called first and only on success call
payloadStorage.delete(...); also remove the redundant webhookId.toString() and
pass webhookId directly to both dao.markAsPermanentlyFailed (id) and
payloadStorage.delete so you don't delete the payload prematurely or use
unnecessary conversions.
- Line 84: The DAO call dao.getById(webhookId) currently expects a non-null
WebhookQueueEntity and will throw if the row was deleted after startProcessing;
change the DAO method signature to return WebhookQueueEntity? (nullable) and
update WebhookQueueRepository's code that calls getById (the startProcessing
flow) to handle a null result gracefully—e.g., detect null, log/metrics and
abort processing or mark the job as finished—so that deleted rows do not cause
an exception; reference: dao.getById, WebhookQueueEntity, startProcessing,
WebhookQueueRepository.
---
Duplicate comments:
In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt`:
- Around line 70-73: The current completeWebhook function deletes the payload
before updating the DB which can leave the queue row stuck in "processing"
without its payload if dao.markAsCompleted(id = webhookId) throws; change the
ordering so dao.markAsCompleted(webhookId) is executed first and only call
payloadStorage.delete(webhookId) after it returns successfully, and optionally
wrap dao.markAsCompleted in a try/catch to avoid deleting on failure (or rethrow
after ensuring payload remains); references: completeWebhook,
payloadStorage.delete, dao.markAsCompleted.
---
Nitpick comments:
In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt`:
- Around line 120-122: The current sequence in WebhookQueueRepository (calling
dao.getOldEntryIds, payloadStorage.delete for each id, then
dao.cleanupOldEntries) can leave storage and DB out-of-sync if
dao.cleanupOldEntries fails; wrap the delete operations in a DB transaction or
switch to a two-phase approach: begin a transaction in the DAO (or add a
dao.markEntriesForDeletion method), fetch and mark the entries (e.g., set a
pending_delete flag) via WebhookQueueRepository, commit the DB change, then
delete payloadStorage files for the marked NanoId keys and finally remove rows
with dao.cleanupMarkedEntries (or perform deletions inside the same transaction
if your storage supports transactional semantics) so that functions
dao.getOldEntryIds, dao.cleanupOldEntries, and payloadStorage.delete are
coordinated and atomic.
In
`@app/src/main/java/me/capcom/smsgateway/modules/webhooks/workers/WebhookQueueProcessorWorker.kt`:
- Around line 371-373: The HttpClient is being closed in the finally block for
each webhook which forces recreation per request; move the client to a
class-level property in WebhookQueueProcessorWorker (e.g., a private lateinit or
lazy val named client) so it is created once and reused across batch processing
(the code that currently constructs the local client and calls client.close() in
the finally should stop doing that), remove the per-request client.close() call,
and add a single cleanup/shutdown method on WebhookQueueProcessorWorker (or hook
into the worker lifecycle) to close the client once when the worker completes or
is disposed.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: dae6d6ab-5810-4fa1-88ca-032b6f7d6f64
📒 Files selected for processing (9)
- app/schemas/me.capcom.smsgateway.data.AppDatabase/20.json
- app/src/main/java/me/capcom/smsgateway/data/AppDatabase.kt
- app/src/main/java/me/capcom/smsgateway/modules/webhooks/Module.kt
- app/src/main/java/me/capcom/smsgateway/modules/webhooks/WebhookPayloadStorage.kt
- app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueDao.kt
- app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueEntity.kt
- app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueRepository.kt
- app/src/main/java/me/capcom/smsgateway/modules/webhooks/workers/SendWebhookWorker.kt
- app/src/main/java/me/capcom/smsgateway/modules/webhooks/workers/WebhookQueueProcessorWorker.kt
✅ Files skipped from review due to trivial changes (4)
- app/src/main/java/me/capcom/smsgateway/modules/webhooks/Module.kt
- app/src/main/java/me/capcom/smsgateway/modules/webhooks/workers/SendWebhookWorker.kt
- app/src/main/java/me/capcom/smsgateway/data/AppDatabase.kt
- app/schemas/me.capcom.smsgateway.data.AppDatabase/20.json
🚧 Files skipped from review as they are similar to previous changes (3)
- app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueEntity.kt
- app/src/main/java/me/capcom/smsgateway/modules/webhooks/WebhookPayloadStorage.kt
- app/src/main/java/me/capcom/smsgateway/modules/webhooks/db/WebhookQueueDao.kt
Motivation
Avoid `CursorWindow`/`SQLiteBlobTooBigException` by moving webhook payloads out of row materialization and loading them lazily.

Description
- Added `WebhookPayloadStorage`, which stores webhook JSON payloads to `context.filesDir/webhook_payloads` and exposes `save`, `read`, and `delete` methods; payload references use the `file:` prefix.
- Registered `WebhookPayloadStorage` in the DI module (`webhooksModule`).
- Updated `SendWebhookWorker.start` to save the serialized payload with `WebhookPayloadStorage.save(...)` and enqueue the returned file reference instead of inline JSON.
- Added a `WebhookQueueRepository.getById` accessor used to fetch full queue rows when needed.
- Updated `WebhookQueueProcessorWorker` to inject `WebhookPayloadStorage`, lazily `read` the payload for each webhook before sending, and `delete` the stored file on successful send or when a webhook becomes permanently failed; updated failure handling to fetch the latest row status with `getById`.
- Added `docs/plans/mms-large-attachments-plan.md` describing the problem, long-term migration options, and test/rollout plans for large MMS attachment handling.

Testing
Codex Task
Summary by CodeRabbit
New Features
Bug Fixes
Improvements