fix(telegram): recover sticky fetch fallback after transient failures #77157
Conversation
Codex review: needs real behavior proof before merge.

Best possible solution: land the focused Telegram transport recovery after redacted real-runtime proof is added, leaving pool-size, fallback-IP, and config-knob tuning as separate decisions.

Do we have a high-confidence way to reproduce the issue? Yes, at the source level: current main only promotes
Is this the best way to solve the issue? Yes for the code direction: a bounded primary recovery probe inside the Telegram plugin transport is narrower than adding config knobs or changing the fallback-IP policy.

Next step before merge: the PR is not merge-ready until non-mock, after-fix runtime proof is supplied.

Acceptance criteria:
What I checked:
Likely related people:
Remaining risk / open question:
Codex review notes: model gpt-5.5, reasoning high; reviewed against 4e983aa57b8b.
Force-pushed bf029b9 to e78ef27.
Force-pushed e78ef27 to 1bf2801.
Landed via rebase onto main.
Thanks @MkDev11!
Summary
Change Type
Scope
Linked Issue/PR
Root Cause
stickyAttemptIndex was monotonic and had no success-path recovery logic.
Regression Test Plan
extensions/telegram/src/fetch.test.ts
User-visible / Behavior Changes
The Telegram transport can recover from the sticky IPv4/pinned-IP fallback without restarting the gateway once the primary path becomes healthy again.
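A minimal sketch of the success-path recovery described above. This is illustrative, not the actual plugin code: the names `StickyState`, `onFallbackSuccess`, and the `MAX_RECOVERY_PROBES` bound are assumptions; only `stickyAttemptIndex` and the "bounded primary recovery probe" idea come from the PR.

```typescript
// Hypothetical model of the transport's sticky-fallback state.
// stickyAttemptIndex: 0 = primary path, >0 = pinned fallback entry.
type StickyState = {
  stickyAttemptIndex: number;
  probesSinceFailover: number;
};

// Assumed bound on recovery probes; the real value lives in the plugin.
const MAX_RECOVERY_PROBES = 3;

// Called after a successful fallback response. Previously the index was
// monotonic: nothing ever returned it to 0, so only a gateway restart
// escaped the sticky fallback. The fix probes the primary path a bounded
// number of times and resets to primary once a probe succeeds.
function onFallbackSuccess(
  state: StickyState,
  primaryHealthy: boolean,
): StickyState {
  if (state.stickyAttemptIndex === 0) return state; // already on primary
  if (state.probesSinceFailover >= MAX_RECOVERY_PROBES) return state;
  if (primaryHealthy) {
    // Success-path recovery: back to the primary path, counters cleared.
    return { stickyAttemptIndex: 0, probesSinceFailover: 0 };
  }
  return { ...state, probesSinceFailover: state.probesSinceFailover + 1 };
}
```

The key design point is that probing only happens on an otherwise-successful request, so a still-unhealthy primary cannot add failures to the user-visible path.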
Diagram
Security Impact (required)
Yes, explain risk + mitigation: N/A
Repro + Verification
Environment
Steps
Expected
Actual
Evidence
Human Verification
What you personally verified (not just CI), and how:
Review Conversations
Compatibility / Migration
Risks and Mitigations
Real behavior proof
pnpm test extensions/telegram/src/fetch.test.ts