feat(recovery-events): add revenue recovery topic and vector config to push these events to s3 (#8285)
Merged
likhinbopanna merged 139 commits into main from …tion_monitoring_feilds on Jul 25, 2025

Conversation
tsdk02 previously approved these changes (Jul 14, 2025)
6c72941
srujanchikke previously approved these changes (Jul 17, 2025)
tsdk02 previously approved these changes (Jul 17, 2025)
jarnura previously approved these changes (Jul 17, 2025)
3bc3df2
srujanchikke approved these changes (Jul 24, 2025)
jarnura approved these changes (Jul 25, 2025)
tsdk02 approved these changes (Jul 25, 2025)
pixincreate added a commit that referenced this pull request (Jul 28, 2025):

…rver * 'main' of github.com:juspay/hyperswitch: (24 commits)
- chore(version): 2025.07.28.1
- feat(core): Hyperswitch <|> UCS Mandate flow integration (#8738)
- feat(themes): Create user APIs for managing themes (#8387)
- chore: update devDependencies for cypress (#8735)
- refactor: Add routing_approach other variant to handle unknown data (#8754)
- chore(version): 2025.07.28.0
- refactor(connector): [facilitapay] move destination bank account number to connector metadata (#8704)
- feat(recovery-events): add revenue recovery topic and vector config to push these events to s3 (#8285)
- ci(cypress): add authorizedotnet connector (#8688)
- refactor(schema): add a new column for storing large customer user agents in mandate table (#8616)
- feat(authentication): add authentication api for modular authentication (#8459)
- feat(connector): [MPGS] template code (#8544)
- fix(chat): append request id to headers for chat request (#8680)
- feat(connector): [Flexiti] template code for flexiti connector (#8714)
- chore(version): 2025.07.25.0
- feat(core): Consuming locale in PaymentsAuthorizeData from SessionState (#8731)
- fix(payment-methods): fetch payment method details in payouts flow (#8729)
- refactor(core): remove hardcoded timeout limit of 5s for outgoing webhook requests (#8725)
- feat(connector): [Breadpay] Add support for Breadpay connector (#8676)
- fix(feature_matrix): refunds are supported by jpmorgan (#8699)
- ...
Type of Change
Description
This new Kafka topic carries the revenue recovery events, which are constructed from both RecoveryPaymentIntent and RecoveryPaymentAttempt in the revenue recovery flow. The events from Kafka are picked up by Vector, batched, and eventually pushed to S3 based on the configured settings. The file path will look like this: merchant_id/Year/month/timestamp.csv. The events are recorded at two places: the Webhook Flow (external payments done by the billing processor) and the Internal Proxy Flow (internal payments done by Hyperswitch as part of retrying). The event structure is documented in the Kafka message in the code.

The estimated size of each event is around 1000 bytes. We want to configure Vector with a batch size of 1000 events and a one-day (86400 s) timeout, so that Vector collects events until either the timeout expires or it hits 1000 events, then pushes them to an S3 file at the designated path in CSV format, with columns ordered as specified in the config. Based on this configuration, Vector needs a 10 MB buffer. S3 authentication will be done using the IAM instance profile, which will be taken care of at deployment time.
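For illustration, the batching and S3 layout described above could be expressed as a Vector pipeline roughly like the sketch below. This is not the config merged in this PR; the topic name, bucket, and CSV columns are placeholders, and the exact field list should be taken from the Kafka message structure in the code.

```toml
# Hypothetical Vector pipeline sketch for the revenue recovery events.
[sources.recovery_events]
type              = "kafka"
bootstrap_servers = "localhost:9092"                         # placeholder
group_id          = "recovery-events-s3"                     # placeholder
topics            = ["hyperswitch-revenue-recovery-events"]  # placeholder topic name
decoding.codec    = "json"

[sinks.recovery_s3]
type       = "aws_s3"
inputs     = ["recovery_events"]
bucket     = "revenue-recovery-bucket"   # placeholder bucket
# merchant_id/Year/month/ layout; the timestamped .csv filename is appended by the sink.
key_prefix = "{{ merchant_id }}/%Y/%m/"
# Flush when 1000 events are collected or once a day, whichever comes first.
batch.max_events   = 1000
batch.timeout_secs = 86400
# CSV output with an explicit column order (placeholder columns).
encoding.codec      = "csv"
encoding.csv.fields = ["merchant_id", "payment_intent_id", "attempt_id", "status"]
# No credentials here: auth is expected to come from the IAM instance profile
# at deployment time, as described above.
```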
Additional Changes
Motivation and Context
For the Revenue Recovery System, we needed a pipeline that can move transaction-level data to S3, where we store the entire transactional dataset, which can further be used to train models. The transactional data consists of various parameters from both the payment intent and the payment attempt, so we created a new topic to facilitate this requirement.
How did you test it?
At the time of merging this PR, Kafka is not enabled for V2, so it cannot be tested yet.

Once Kafka is enabled for V2, it can be tested as follows:
Follow the steps in PR #7461, then check the S3 bucket attached to that Kafka topic for a file after waiting for one day.
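When checking the bucket, a small helper like the one below can be used to compute the expected object key from the merchant_id/Year/month/timestamp.csv layout described above. The function name and the exact timestamp format are assumptions for illustration; the real filename depends on the configured Vector sink.

```python
from datetime import datetime, timezone

def expected_s3_key(merchant_id: str, now: datetime) -> str:
    # Hypothetical helper: reconstructs the key layout described in this PR,
    # merchant_id/Year/month/timestamp.csv. A Unix epoch timestamp is assumed
    # here; the actual format is whatever the Vector sink emits.
    return f"{merchant_id}/{now.year}/{now.month:02d}/{int(now.timestamp())}.csv"

key = expected_s3_key("merchant_123", datetime(2025, 7, 25, tzinfo=timezone.utc))
print(key)  # merchant_123/2025/07/1753401600.csv
```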
Sample Kafka Event:
Here is the S3 File Path:

Here is the AWS File sample:

Checklist
cargo +nightly fmt --all
cargo clippy