fix(nextjs): Don't share I/O resources in between requests (#6980)
Conversation
AbhiPrasad left a comment:
We could also just update makePromiseBuffer to clear on drain, right? And add an option to pass into the constructor?
```ts
 * We need this in the edge runtime because edge function invocations may not share I/O objects, like fetch requests
 * and responses, and the normal PromiseBuffer inherently buffers stuff in between incoming requests.
 */
class IsolatedPromiseBuffer {
```
l: should this implement the PromiseBuffer interface?
It is not exported - should we do that?
Ah, maybe we just leave it then; that's more of a change than I thought.
```ts
// If we ever remove it from the interface we should also remove it here.
public $: Array<PromiseLike<TransportMakeRequestResponse>> = [];

private _taskProducers: (() => PromiseLike<TransportMakeRequestResponse>)[] = [];
```
l: Why do we use _taskProducers instead of $? $ is only used by tests iirc.
They're of a different type. We need to store producers in this case because we do not want to actually create the promises right away. Producers are () => Promise whereas $ is just an array of Promises.
I see, so we are essentially making all outgoing fetch requests lazy, because we want to only send them when drain is called.
I guess this works for now, but there is always the risk that data does not get sent to Sentry because of this, right? This is an important caveat that should be documented in a code comment, so it's clear we knew about this tradeoff and proceeded anyway.
Generally, we flush on every edge function invocation so we should be sending everything. This implementation has the drawback though that there is a hard cap on the amount of events sent for any given edge function invocation: https://github.com/getsentry/sentry-javascript/pull/6980/files#diff-750cbb0cd6b1f70c9ca7a215d0e9ef1e0268b822bf194ccf49feddb7a6341d64R19-R20
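The hard cap mentioned above can be sketched like this (hypothetical names and a made-up default; the real buffer size comes from the linked diff): because events are only sent on drain, every event queued within one invocation counts against the buffer size, and anything beyond it is dropped.

```typescript
// Hypothetical sketch of the per-invocation cap: add() refuses new task
// producers once the buffer is full, so events past the cap are dropped
// rather than sent on the next drain().
class CappedLazyBuffer<T> {
  private _taskProducers: (() => Promise<T>)[] = [];

  // bufferSize is an assumed illustrative default, not the SDK's actual value.
  public constructor(private readonly _bufferSize: number = 30) {}

  public add(taskProducer: () => Promise<T>): boolean {
    if (this._taskProducers.length >= this._bufferSize) {
      return false; // over the per-invocation cap: this event is dropped
    }
    this._taskProducers.push(taskProducer);
    return true;
  }

  public async drain(): Promise<T[]> {
    const producers = this._taskProducers;
    this._taskProducers = [];
    return Promise.all(producers.map(produce => produce()));
  }
}
```

Since the buffer is cleared on every drain, the cap applies per edge function invocation rather than across the lifetime of the process.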
```diff
  }

- return createTransport(options, makeRequest);
+ return createTransport(options, makeRequest, new IsolatedPromiseBuffer(options.bufferSize));
```
m: If we export IsolatedPromiseBuffer, we can add unit tests for drain, I think we should have that to prevent regressions.
If we can avoid bloating the API for this fix I would probably do so. We need to rethink this part of the SDK anyhow when we build a runtime-agnostic one. Do you have a stronger opinion here?
Edit: Thought about this more; doing that will not fix the problem. Two request handlers may write to that promise buffer and one flushes both fetch requests, again sharing the response.
AbhiPrasad left a comment:
Thanks for the tests!
Fixes #6975
Vercel edge routes don't allow shared access to I/O resources (req and res objects, etc.) between requests. Our flushing logic inside the default PromiseBuffer was inherently doing that by processing outgoing fetch requests even after the edge route terminated.
This PR resolves the issue by making the flushing of Sentry events atomic to a particular request.