feat: enable TUS upload support via Supabase Storage proxy #1488
Conversation
Note: Other AI code review bot(s) detected. CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

📝 Walkthrough

Adds a Supabase Storage TUS proxy and routes POST/HEAD/PATCH upload traffic to new TUS handlers in non-workerd runtimes; updates upload ID extraction for base64 TUS IDs, enables TUS in non-workerd config, adds end-to-end TUS tests, and minor CLI/test cwd and delete-error handling tweaks.
Sequence Diagram(s)

```mermaid
sequenceDiagram
  participant Client as Client
  participant Router as Upload Router
  participant TUSProxy as TUS Proxy Handler
  participant Supabase as Supabase Storage
  rect rgba(100,150,200,0.5)
    Note over Client,Router: TUS Upload Creation Flow
    Client->>Router: POST /files/upload/... (filename, Upload-Length, metadata)
    Router->>TUSProxy: supabaseTusCreateHandler()
    TUSProxy->>Supabase: POST to Supabase TUS (transformed metadata + headers)
    Supabase-->>TUSProxy: 201 Created + Location
    TUSProxy->>TUSProxy: Rewrite Location to external URL
    TUSProxy-->>Client: 201 Created (Location + TUS headers)
  end
  rect rgba(150,200,100,0.5)
    Note over Client,Router: TUS Chunk Upload Flow
    Client->>Router: PATCH /files/upload/.../:id (chunk, Upload-Offset)
    Router->>TUSProxy: supabaseTusPatchHandler()
    TUSProxy->>Supabase: PATCH to Supabase TUS (chunk + offset headers)
    Supabase-->>TUSProxy: 204 No Content (Upload-Offset)
    TUSProxy-->>Client: 204 No Content (Upload-Offset + TUS headers)
  end
  rect rgba(200,150,100,0.5)
    Note over Client,Router: TUS Progress Check Flow
    Client->>Router: HEAD /files/upload/.../:id
    Router->>TUSProxy: supabaseTusHeadHandler()
    TUSProxy->>Supabase: HEAD to Supabase TUS
    Supabase-->>TUSProxy: 200 OK (Upload-Offset, Upload-Length)
    TUSProxy-->>Client: 200 OK (Upload-Offset, Upload-Length)
  end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: 2 passed, 1 failed (warning).
Actionable comments posted: 8
🤖 Fix all issues with AI agents
In `@supabase/functions/_backend/files/supabaseTusProxy.ts`:
- Around line 236-241: The HEAD handler's fetch call (in supabaseTusHeadHandler)
needs the same timeout and error handling as the create handler: wrap the
fetch(supabaseUrl, { method: 'HEAD', headers }) in a try/catch, use an
AbortController with a short timeout and clear the timer, catch and log errors
via cloudlog including requestId/error, and handle non-2xx responses by logging
and returning an appropriate response (e.g., propagate status or a 504/502)
instead of letting the request hang or fail silently.
- Around line 194-200: The PATCH handler's fetch call (where response is
assigned) needs the same timeout and error handling as the create handler: wrap
the fetch(...) that uses c.req.raw.body in a try/catch, use an AbortController
to enforce a timeout and clear its timer after fetch completes, catch and log
errors with cloudlog including requestId and the error, and return an
appropriate error response if the fetch fails or times out; update references to
response and any downstream logic to handle the case where fetch threw. Ensure
the unique symbols mentioned—response, cloudlog, c.req.raw.body—are used to
locate and modify the PATCH handler code.
- Around line 44-85: In rewriteLocationHeader validate the extracted uploadId
(from supabaseLocation.split('/').pop()) before using it: compute uploadId, and
if it's falsy or empty, call cloudlog with the requestId and supabaseLocation to
record the malformed URL and then throw an Error (or return the original
supabaseLocation) to avoid returning a bad Location header; ensure the cloudlog
message includes context and that the subsequent return only runs when uploadId
is a non-empty string.
- Around line 134-141: The fetch to supabaseUrl in the supabaseTusCreateHandler
lacks timeout and error handling—wrap the upstream request in a try/catch and
use an AbortController with a sensible timeout (e.g., 5–15s) to abort slow
requests; on error or timeout, log the error via cloudlog including requestId
and response.status (or error.message) and return a controlled error response to
the caller instead of letting the request hang or crash; update the fetch call
that uses headers and c.req.raw.body to use the controller.signal and ensure the
controller is cleared on success.
In `@tests/tus-upload.test.ts`:
- Around line 104-106: The ESLint rule requires describe block titles to start
with a lowercase letter; update every describe invocation in
tests/tus-upload.test.ts (e.g., the top-level describe currently titled "TUS
Upload Protocol Tests") to use a title that begins with a lowercase character
(for example change "TUS Upload Protocol Tests" to "tus upload protocol tests"),
and apply the same lowercasing convention to all nested describe(...) strings in
the file so they all comply with test/prefer-lowercase-title.
- Around line 117-124: The test currently treats the JSON result as unknown,
causing type errors when accessing config.TUSUpload and config.maxUploadLength;
fix it by asserting the response JSON to the expected shape (e.g. define or
inline a type/interface with TUSUpload: boolean and maxUploadLength: number) and
cast the result of response.json() to that type (for example use "const config =
await response.json() as { TUSUpload: boolean; maxUploadLength: number }") so
the test can access config.TUSUpload and config.maxUploadLength without
TypeScript errors.
- Around line 56-66: The fetch call using variable "data" passes a Uint8Array
into the request body which causes TS2769; update the PATCH request to wrap the
Uint8Array in a Blob (e.g., new Blob([data])) or pass data.buffer (ArrayBuffer)
so the Fetch API accepts it—modify the body in the fetch invocation where "data"
is used (the PATCH to uploadUrl with headers 'Tus-Resumable', 'Upload-Offset',
etc.) to use a Blob or ArrayBuffer instead.
- Around line 16-17: Remove the unused constant declarations SUPABASE_URL and
SUPABASE_SERVICE_KEY from tests/tus-upload.test.ts (they are declared but never
referenced), so delete the lines that define SUPABASE_URL and
SUPABASE_SERVICE_KEY (or replace them with underscored names if you intend to
keep them for future use) to resolve the TS6133 unused variable compilation
errors.
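The timeout-and-error-handling pattern requested in the `supabaseTusProxy.ts` comments above can be sketched as a small wrapper. This is a minimal illustration, not code from the PR: `proxyFetch` and the 10-second default are assumed names/values, and a real caller would map an abort to a 502/504 and log the `requestId` via `cloudlog` as the comments describe.

```typescript
// Sketch only: an AbortController-based timeout wrapper of the kind the review
// suggests. `proxyFetch` is a hypothetical name, not code from this PR.
async function proxyFetch(url: string, init: RequestInit = {}, timeoutMs = 10_000): Promise<Response> {
  const controller = new AbortController()
  // Abort the upstream request if the server does not answer within timeoutMs.
  const timer = setTimeout(() => controller.abort(), timeoutMs)
  try {
    return await fetch(url, { ...init, signal: controller.signal })
  }
  finally {
    // Always clear the timer so a fast response does not leak a pending abort.
    clearTimeout(timer)
  }
}
```

On timeout, `fetch` rejects with an `AbortError`, which the handler can catch and turn into a controlled error response instead of letting the request hang.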
🧹 Nitpick comments (2)
supabase/functions/_backend/files/supabaseTusProxy.ts (1)
1-4: Consider using typed Context for consistency with coding guidelines. The handlers use `Context` from Hono, but the coding guidelines specify that handlers should accept `Context<MiddlewareKeyVariables>`. This provides type safety for `c.get('requestId')`, `c.get('fileId')`, etc.

Proposed type import change:

```diff
-import type { Context } from 'hono'
+import type { Context } from 'hono'
+import type { MiddlewareKeyVariables } from '../utils/hono.ts'
```

Then update handler signatures:

```diff
-export async function supabaseTusCreateHandler(c: Context): Promise<Response> {
+export async function supabaseTusCreateHandler(c: Context<MiddlewareKeyVariables>): Promise<Response> {
```

Based on coding guidelines for `supabase/functions/_backend/**/*.{ts,js}`.

supabase/functions/_backend/files/files.ts (1)

285-316: Consider adding a comment to the empty catch block. The empty catch block at lines 314-316 silently handles non-base64 strings, which is intentional. Adding a brief comment clarifies this is expected behavior.

Proposed improvement:

```diff
 } catch {
-  // Not a base64 string, use as-is
+  // Not a valid base64 string or not a Supabase TUS ID - use the ID as-is
+  // This is expected for non-TUS requests or already-decoded IDs
 }
```
Pull request overview
This pull request adds TUS (resumable upload) protocol support for self-hosted Capgo deployments by implementing a proxy to Supabase Storage's TUS endpoint. The implementation enables chunked, resumable file uploads through POST/PATCH/HEAD handlers with proper URL rewriting and metadata transformation.
Changes:
- Implements Supabase Storage TUS proxy handlers for POST (create upload), PATCH (upload chunks), and HEAD (check progress) operations
- Updates the files config endpoint to return `TUSUpload: true` for non-workerd runtimes (Supabase Edge Functions)
- Routes TUS requests through authentication and authorization middleware before proxying to Supabase Storage
- Adds comprehensive test suite covering TUS protocol operations, resumable uploads, and error scenarios
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 11 comments.
| File | Description |
|---|---|
| tests/tus-upload.test.ts | Comprehensive test suite for TUS upload protocol including OPTIONS discovery, POST/PATCH/HEAD operations, resumable scenarios, and error handling |
| supabase/functions/_backend/files/supabaseTusProxy.ts | Core TUS proxy implementation with handlers for creating uploads, uploading chunks, and checking progress, including URL and metadata transformation |
| supabase/functions/_backend/files/files_config.ts | Updated to return TUSUpload: true for Supabase Edge Functions runtime |
| supabase/functions/_backend/files/files.ts | Routing integration to direct TUS requests to proxy handlers on non-workerd runtimes, with TUS upload ID parsing logic |
```ts
app.get(`/upload/${ATTACHMENT_PREFIX}/:id{.+}`, middlewareKey(['all', 'write', 'upload'], true), setKeyFromIdParam, checkWriteAppAccess, async (c) => {
  // Detect TUS HEAD request (has TUS header and is HEAD method)
  const isTusRequest = c.req.header('Tus-Resumable') != null
  const isHead = c.req.method === 'HEAD'

  if (isHead && isTusRequest && getRuntimeKey() !== 'workerd') {
    cloudlog({ requestId: c.get('requestId'), message: 'Routing HEAD TUS request to supabaseTusHeadHandler' })
    return supabaseTusHeadHandler(c)
  }

  // Normal GET handler
  return getHandler(c)
})
```
The TUS HEAD request detection happens after the setKeyFromIdParam and checkWriteAppAccess middlewares have already executed. For regular GET requests (non-TUS), this is fine. However, for TUS HEAD requests, these middlewares perform unnecessary work parsing and validating the TUS upload ID format (lines 285-316 in setKeyFromIdParam). While this doesn't break functionality, it adds overhead. Consider detecting TUS requests earlier in the middleware chain, or ensure that the middleware logic is lightweight enough that this doesn't matter for performance.
Suggested change:

```ts
app.get(
  `/upload/${ATTACHMENT_PREFIX}/:id{.+}`,
  middlewareKey(['all', 'write', 'upload'], true),
  // Early TUS HEAD detection to avoid unnecessary ID parsing and access checks
  async (c, next) => {
    const isTusRequest = c.req.header('Tus-Resumable') != null
    const isHead = c.req.method === 'HEAD'
    if (isHead && isTusRequest && getRuntimeKey() !== 'workerd') {
      cloudlog({ requestId: c.get('requestId'), message: 'Routing HEAD TUS request to supabaseTusHeadHandler (early)' })
      return supabaseTusHeadHandler(c)
    }
    return next()
  },
  setKeyFromIdParam,
  checkWriteAppAccess,
  async (c) => {
    // Normal GET handler
    return getHandler(c)
  },
)
```
```ts
app.get('/', (c) => {
  if (getRuntimeKey() !== 'workerd') {
    // Partial and TUS upload are only available on workerd
    // TUS upload is now supported via Supabase Storage proxy
```
The comment states 'TUS upload is now supported via Supabase Storage proxy', but this only applies when getRuntimeKey() !== 'workerd'. For Cloudflare Workers (workerd runtime), TUS uploads use the Durable Object-based implementation (uploadHandler). Consider clarifying the comment to indicate this applies specifically to Supabase Edge Functions runtime.
Suggested change:

```diff
-    // TUS upload is now supported via Supabase Storage proxy
+    // In Supabase Edge Functions runtime, TUS upload is supported via Supabase Storage proxy
```
```ts
const fileId = c.get('fileId') as string
```
The fileId is retrieved using c.get('fileId') as string, but if the setKeyFromMetadata middleware failed to set fileId (though it should return early with an error), this could result in an undefined value being passed to transformMetadataForSupabase. Consider adding a check to ensure fileId is defined before proceeding, or at minimum log an error if it's undefined.
Suggested change (replacing `const fileId = c.get('fileId') as string`):

```ts
const rawFileId = c.get('fileId')
if (typeof rawFileId !== 'string' || rawFileId.length === 0) {
  cloudlog({
    requestId,
    message: 'supabaseTusCreateHandler missing or invalid fileId in context',
    fileId: rawFileId,
  })
  return new Response('Internal server error: missing fileId', { status: 500 })
}
const fileId = rawFileId
```
```ts
// Store the original upload ID for the TUS handler
c.set('tusUploadId', normalizedFileId)
```
The variable 'tusUploadId' is set using c.set but is not declared in the MiddlewareKeyVariables interface in utils/hono.ts. While this may work at runtime, it creates a type safety issue. Since tusUploadId is never actually read (PATCH and HEAD handlers use c.req.param('id') directly), this c.set call appears to be unnecessary and can be removed. If it's intended for future use, it should be added to the MiddlewareKeyVariables interface.
Suggested change (remove these lines):

```diff
-  // Store the original upload ID for the TUS handler
-  c.set('tusUploadId', normalizedFileId)
```
```ts
cloudlog({
  requestId: c.get('requestId'),
  message: 'setKeyFromIdParam - detected Supabase TUS upload ID',
  decoded,
})
// Extract file path: remove bucket prefix (capgo/) and UUID suffix
// Format: capgo/orgs/xxx/apps/yyy/file.zip/uuid
const parts = decoded.split('/')
// Remove first part (bucket name) and last part (UUID)
const pathParts = parts.slice(1, -1)
extractedFileId = pathParts.join('/')
// Store the original upload ID for the TUS handler
c.set('tusUploadId', normalizedFileId)
cloudlog({
  requestId: c.get('requestId'),
  message: 'setKeyFromIdParam - extracted fileId from TUS ID',
  extractedFileId,
  originalParts: parts,
  pathParts,
})
```
The TUS upload ID extraction logic assumes a specific format: 'capgo/orgs/xxx/apps/yyy/file.zip/uuid'. However, this parsing uses slice(1, -1) which removes the first and last parts. This is fragile if the UUID or file path structure changes. Consider using a more robust parsing approach that explicitly extracts the known parts rather than relying on positional slicing, or add validation to ensure the decoded string matches the expected format before attempting to extract the path.
Suggested change:

```ts
const parts = decoded.split('/')
// Expected format:
// [0]: 'capgo'
// [1]: 'orgs'
// [2]: orgId
// [3]: 'apps'
// [4]: appId
// [5..n-2]: file path segments
// [n-1]: UUID
if (
  parts.length >= 6 &&
  parts[0] === 'capgo' &&
  parts[1] === 'orgs' &&
  parts[3] === 'apps'
) {
  cloudlog({
    requestId: c.get('requestId'),
    message: 'setKeyFromIdParam - detected Supabase TUS upload ID',
    decoded,
    parts,
  })
  // Extract file path: remove bucket prefix (capgo/) and UUID suffix
  // Resulting path starts with "orgs/..."
  const pathParts = parts.slice(1, parts.length - 1)
  if (pathParts.length > 0) {
    extractedFileId = pathParts.join('/')
    // Store the original upload ID for the TUS handler
    c.set('tusUploadId', normalizedFileId)
    cloudlog({
      requestId: c.get('requestId'),
      message: 'setKeyFromIdParam - extracted fileId from TUS ID',
      extractedFileId,
      originalParts: parts,
      pathParts,
    })
  }
  else {
    cloudlog({
      requestId: c.get('requestId'),
      message: 'setKeyFromIdParam - TUS ID decoded but pathParts is empty, using normalizedFileId as fileId',
      decoded,
      parts,
    })
  }
}
else {
  cloudlog({
    requestId: c.get('requestId'),
    message: 'setKeyFromIdParam - decoded TUS ID has unexpected structure, using normalizedFileId as fileId',
    decoded,
    parts,
  })
}
```
```ts
    cloudlog({ requestId, message: 'supabaseTusCreateHandler location rewritten', original: location, rewritten: rewrittenLocation })
  }

  return new Response(null, {
```
The TUS proxy handlers always return a Response with null body, even when Supabase returns an error with a body (e.g., 400, 403, 500). While the status code is correctly forwarded, error details from Supabase (which might contain useful information for debugging) are lost. Consider forwarding the response body for error status codes (4xx, 5xx) to help with debugging and provide better error messages to clients.
Suggested change (replacing `return new Response(null, {`):

```ts
// Forward Supabase error body (if any) to the client for 4xx/5xx responses
let responseBody: BodyInit | null = null
if (response.status >= 400) {
  try {
    const textBody = await response.text()
    if (textBody) {
      responseBody = textBody
      const supabaseContentType = response.headers.get('Content-Type')
      if (supabaseContentType)
        responseHeaders.set('Content-Type', supabaseContentType)
    }
  }
  catch {
    // If reading the body fails, fall back to empty body
  }
}
return new Response(responseBody, {
```
```ts
    responseHeaders.set(header, value)
  }

  return new Response(null, {
```
Similar to the POST handler, the PATCH handler returns a Response with null body even for error responses from Supabase. Consider forwarding error response bodies for better debugging and error reporting.
Suggested change (replacing `return new Response(null, {`):

```ts
// Forward error body from Supabase for better debugging, keep empty body on success
let body: BodyInit | null = null
if (!response.ok) {
  try {
    const text = await response.text()
    body = text || null
  }
  catch {
    body = null
  }
}
return new Response(body, {
```
```ts
    responseHeaders.set(header, value)
  }

  return new Response(null, {
```
Similar to POST and PATCH handlers, the HEAD handler returns a Response with null body even for error responses from Supabase. Consider forwarding error response bodies for better debugging and error reporting.
Suggested change (replacing `return new Response(null, {`):

```ts
// For better debugging and error reporting, forward the Supabase error body
// while keeping HEAD semantics (no body) for successful responses.
let body: string | null = null
if (!response.ok) {
  try {
    const text = await response.text()
    if (text)
      body = text
  }
  catch {
    // If reading the body fails, fall back to null body as before.
    body = null
  }
}
return new Response(body, {
```
```ts
// Extract uploadId from Supabase URL
const uploadId = supabaseLocation.split('/').pop()
```
The uploadId is extracted using supabaseLocation.split('/').pop(), which assumes the URL ends with the upload ID. This is fragile if the URL has a trailing slash or query parameters. Consider using a more robust extraction method, such as a regex or URL parsing, to ensure the upload ID is correctly extracted in all cases.
Suggested change (replacing the `split('/').pop()` extraction):

```ts
// Extract uploadId from Supabase URL in a robust way (handle trailing slashes, query, fragment)
let uploadId: string | undefined
try {
  const url = new URL(supabaseLocation)
  const pathSegments = url.pathname.split('/').filter(Boolean)
  uploadId = pathSegments[pathSegments.length - 1]
}
catch {
  // Fallback for non-absolute URLs: strip query/fragment and get last non-empty segment
  const pathWithoutQuery = supabaseLocation.split('?')[0].split('#')[0]
  const pathSegments = pathWithoutQuery.split('/').filter(Boolean)
  uploadId = pathSegments[pathSegments.length - 1]
}
// In the unlikely case we still don't have an uploadId, default to empty string
if (!uploadId) {
  cloudlog({ requestId, message: 'rewriteLocationHeader: could not determine uploadId from Supabase Location', supabaseLocation })
  uploadId = ''
}
```
```ts
else {
  // Production self-hosted: SUPABASE_URL should be the external URL
  baseUrl = supabaseUrl
}
```
The fallback logic for production self-hosted environments assumes SUPABASE_URL is the external URL (line 78-79). However, in production self-hosted Supabase deployments, SUPABASE_URL often points to the internal service URL (e.g., http://kong:8000 or similar), not the externally accessible URL. When X-Forwarded-Host is not present and the URL doesn't contain 'kong:8000', this would incorrectly use the internal URL in the Location header, making it inaccessible to clients. Consider adding an explicit EXTERNAL_URL environment variable for production deployments or logging a warning when falling back to SUPABASE_URL without forwarded headers.
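The fallback order this comment proposes can be sketched as below. Note the `EXTERNAL_URL` variable and the `resolveExternalBaseUrl` helper are proposals from the comment, not existing code in the PR:

```typescript
// Sketch: resolve the externally reachable base URL for the Location header.
// EXTERNAL_URL is a proposed env var; SUPABASE_URL may be internal (e.g. http://kong:8000).
function resolveExternalBaseUrl(
  env: { EXTERNAL_URL?: string, SUPABASE_URL?: string },
  forwardedHost?: string,
  forwardedProto = 'https',
): string {
  if (env.EXTERNAL_URL)
    return env.EXTERNAL_URL // explicit external URL wins
  if (forwardedHost)
    return `${forwardedProto}://${forwardedHost}` // trust the proxy's forwarded host
  if (env.SUPABASE_URL)
    return env.SUPABASE_URL // may be internal; a warning should be logged here
  throw new Error('no base URL available for Location rewrite')
}
```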
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 1b51016f3f
```ts
const objectNameB64 = btoa(objectName)
const filenameB64 = btoa(objectName)
```
There was a problem hiding this comment.
Use UTF-8-safe base64 for objectName metadata
This uses btoa(objectName)/btoa(filename) which throws InvalidCharacterError for non‑Latin1 input, but fileId originates from Upload-Metadata decoded via TextDecoder and can include UTF‑8. In self‑hosted TUS mode, any Unicode character in the app path or filename (e.g. localized filenames) will cause the create request to fail before reaching Supabase, whereas the existing R2 path accepts UTF‑8 metadata. Consider encoding with TextEncoder/toBase64 (or Buffer) to support Unicode safely.
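A minimal sketch of the UTF-8-safe encoding the comment recommends, going through `TextEncoder` so non-Latin1 input does not throw. The helper name `toBase64Utf8` is illustrative, not code from this PR:

```typescript
// Sketch: base64-encode arbitrary Unicode by going through UTF-8 bytes first,
// avoiding btoa's InvalidCharacterError on code points above 255.
function toBase64Utf8(value: string): string {
  const bytes = new TextEncoder().encode(value)
  let binary = ''
  for (const b of bytes)
    binary += String.fromCharCode(b) // each byte maps to a Latin-1-safe char
  return btoa(binary)
}
```

Decoding on the Supabase side with a UTF-8-aware base64 decoder yields the original string, matching how the R2 path already handles UTF-8 metadata.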
```ts
// Transform metadata to include bucket and object name
const transformedMetadata = transformMetadataForSupabase(fileId)
headers.set('Upload-Metadata', transformedMetadata)
```
There was a problem hiding this comment.
Preserve filetype metadata when proxying Upload-Metadata
The proxy overwrites Upload-Metadata with only bucketName/objectName/filename, dropping client-provided metadata like filetype. The R2 TUS path uses uploadMetadata.filetype to set the stored object’s contentType (see uploadHandler.ts), so self‑hosted uploads will now lose MIME type information and default to application/octet-stream even when clients send it. Consider appending the original metadata (at least filetype) instead of replacing it.
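The preservation the comment asks for could look roughly like this. The function name and exact key handling are assumptions, not the PR's code; TUS `Upload-Metadata` is a comma-separated list of `key base64value` pairs:

```typescript
// Sketch: merge client-sent Upload-Metadata with proxy overrides so keys like
// `filetype` survive. Values in `overrides` are assumed Latin-1-safe for btoa.
function mergeUploadMetadata(clientHeader: string | undefined, overrides: Record<string, string>): string {
  const entries = new Map<string, string>()
  if (clientHeader) {
    for (const pair of clientHeader.split(',')) {
      const [key, ...rest] = pair.trim().split(' ')
      if (key)
        entries.set(key, rest.join(' ')) // keep the client's base64 value as-is
    }
  }
  for (const [key, value] of Object.entries(overrides))
    entries.set(key, btoa(value)) // proxy-controlled keys win on conflict
  return [...entries].map(([k, v]) => (v ? `${k} ${v}` : k)).join(',')
}
```

Calling this with the incoming `Upload-Metadata` header plus the proxy's `bucketName`/`objectName`/`filename` overrides would keep `filetype` intact for Supabase to use as the stored content type.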
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@tests/tus-upload.test.ts`:
- Around line 46-61: The TypeScript error occurs because new Blob([data]) uses a
Uint8Array directly; change uploadChunk to pass an ArrayBuffer to Blob instead
of the Uint8Array view: replace the Blob creation at new Blob([data]) with new
Blob([data.buffer.slice(data.byteOffset, data.byteOffset + data.byteLength)])
(or data.buffer if you know the buffer is tightly packed) so the BlobPart is an
ArrayBuffer and satisfies the type checker for the uploadChunk function.
Force-pushed from 993d89d to 333f666.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@supabase/functions/_backend/files/files.ts`:
- Around line 553-574: The early HEAD TUS branch returns supabaseTusHeadHandler
before running setKeyFromIdParam and checkWriteAppAccess, letting anyone with
any upload key probe other apps; ensure app-level access is enforced by running
setKeyFromIdParam and checkWriteAppAccess prior to routing to
supabaseTusHeadHandler (either move the early HEAD check after those middleware
or invoke those checks inside the early branch) so that supabaseTusHeadHandler
is only called when the app key is validated.
🧹 Nitpick comments (2)
supabase/functions/_backend/files/supabaseTusProxy.ts (1)

209-265: Align handler context types with MiddlewareKeyVariables. Backend handlers should use `Context<MiddlewareKeyVariables>` for consistent typing of `c.get(...)` and to match backend conventions. As per coding guidelines, ...

♻️ Proposed update:

```diff
-import type { Context } from 'hono'
+import type { Context } from 'hono'
+import type { MiddlewareKeyVariables } from '../utils/hono.ts'

-export async function supabaseTusCreateHandler(c: Context): Promise<Response> {
+export async function supabaseTusCreateHandler(c: Context<MiddlewareKeyVariables>): Promise<Response> {

-export async function supabaseTusPatchHandler(c: Context): Promise<Response> {
+export async function supabaseTusPatchHandler(c: Context<MiddlewareKeyVariables>): Promise<Response> {

-export async function supabaseTusHeadHandler(c: Context): Promise<Response> {
+export async function supabaseTusHeadHandler(c: Context<MiddlewareKeyVariables>): Promise<Response> {
```

Also applies to: 270-299, 304-332

tests/tus-upload.test.ts (1)

99-327: Consider `it.concurrent()` with per-test isolation for parallel-friendly runs. Guidelines call for concurrent tests where possible. These cases are mostly independent, but they share a single `APPNAME`. If you enable concurrency, generate per-test app IDs (or per-test file prefixes) and seed per test to avoid cross-test coupling. As per coding guidelines, ...
Force-pushed from 333f666 to 68d4eab.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@supabase/functions/_backend/files/supabaseTusProxy.ts`:
- Around line 209-265: Change the handler signatures to accept a typed context
and read auth/apikey from it: update supabaseTusCreateHandler (and likewise
supabaseTusPatchHandler and supabaseTusHeadHandler) to use
Context<MiddlewareKeyVariables> instead of Context, import
MiddlewareKeyVariables, and replace any implicit assumptions about
request-scoped auth with explicit c.get('apikey') and c.get('auth') lookups (use
these values when building headers via buildSupabaseAuthHeaders or anywhere auth
is forwarded); ensure any types for c.get returns are null-checked before use
and update forwardHeaders/transformMetadataForSupabase calls if they need the
apikey/auth values.
🧹 Nitpick comments (4)
tests/tus-upload.test.ts (2)

46-60: Avoid double type-assertion for the PATCH body. The `as unknown as BodyInit` bypasses type safety. Using an `ArrayBuffer` slice keeps correct typing and avoids hidden incompatibilities in runtimes/typings. Please verify that the target runtime's Fetch `BodyInit` accepts `ArrayBuffer` as expected.

♻️ Suggested change:

```diff
-    body: data as unknown as BodyInit,
+    body: data.buffer.slice(data.byteOffset, data.byteOffset + data.byteLength),
```

99-110: Prefer `it.concurrent()` for parallelized test execution. These tests run in parallel across files; within-file concurrency helps and is expected by repo guidelines. Apply `it.concurrent` across this suite and ensure unique filenames per test remain intact. As per coding guidelines, ...

♻️ Example pattern (apply throughout):

```diff
-  it('should return TUSUpload: true for self-hosted', async () => {
+  it.concurrent('should return TUSUpload: true for self-hosted', async () => {
```

supabase/functions/_backend/files/supabaseTusProxy.ts (2)

134-140: Consider adding an `apikey` header (and guarding a missing service key). Supabase Storage commonly expects `apikey` alongside `Authorization`. Also consider failing fast if `SUPABASE_SERVICE_ROLE_KEY` is missing to avoid sending `Bearer undefined`. Please confirm required auth headers in Supabase Storage TUS docs for your target version.

♻️ Suggested change:

```diff
 function buildSupabaseAuthHeaders(c: Context): Headers {
   const serviceRoleKey = getEnv(c, 'SUPABASE_SERVICE_ROLE_KEY')
   const headers = new Headers()
   headers.set('Authorization', `Bearer ${serviceRoleKey}`)
+  headers.set('apikey', serviceRoleKey)
   headers.set('Tus-Resumable', TUS_VERSION)
   return headers
 }
```

167-204: Verify `AbortSignal.timeout` support in target runtimes. If the Supabase Edge/Workers runtime lacks this API, the proxy will throw before fetching. Consider falling back to `AbortController` + `setTimeout`.

♻️ Fallback pattern:

```diff
-  return await fetch(url, {
-    ...options,
-    signal: AbortSignal.timeout(SUPABASE_TIMEOUT),
-  })
+  const controller = new AbortController()
+  const timer = setTimeout(() => controller.abort(), SUPABASE_TIMEOUT)
+  try {
+    return await fetch(url, { ...options, signal: controller.signal })
+  }
+  finally {
+    clearTimeout(timer)
+  }
```
```ts
export async function supabaseTusCreateHandler(c: Context): Promise<Response> {
  const requestId = c.get('requestId')
  const rawFileId = c.get('fileId')

  if (typeof rawFileId !== 'string' || rawFileId.length === 0) {
    cloudlog({ requestId, message: 'supabaseTusCreateHandler missing or invalid fileId', fileId: rawFileId })
    return new Response(JSON.stringify({ error: 'internal_error', message: 'Internal server error: missing fileId' }), {
      status: 500,
      headers: { 'Content-Type': 'application/json' },
    })
  }

  cloudlog({ requestId, message: 'supabaseTusCreateHandler', fileId: rawFileId })

  const supabaseUrl = buildSupabaseTusUrl(c)
  const headers = buildSupabaseAuthHeaders(c)
  forwardHeaders(c, headers, ['Upload-Length', 'Upload-Defer-Length', 'Content-Type', 'Content-Length', 'X-Upsert'])

  const transformedMetadata = transformMetadataForSupabase(c, rawFileId)
  headers.set('Upload-Metadata', transformedMetadata)

  cloudlog({ requestId, message: 'supabaseTusCreateHandler forwarding', supabaseUrl, transformedMetadata })

  const result = await proxyToSupabase(requestId, 'supabaseTusCreateHandler', supabaseUrl, {
    method: 'POST',
    headers,
    body: c.req.raw.body,
  })

  if ('error' in result)
    return result.response

  const response = result
  cloudlog({ requestId, message: 'supabaseTusCreateHandler response', status: response.status })

  const responseHeaders = buildTusResponseHeaders()
  copyResponseHeaders(response.headers, responseHeaders, ['Upload-Offset', 'Upload-Expires'])

  const location = response.headers.get('Location')
  if (location) {
    const rewrittenLocation = rewriteLocationHeader(c, location)
    responseHeaders.set('Location', rewrittenLocation)
    cloudlog({ requestId, message: 'supabaseTusCreateHandler location rewritten', original: location, rewritten: rewrittenLocation })
  }

  let responseBody: BodyInit | null = null
  if (response.status >= 400) {
    responseBody = await readErrorBody(response)
    if (responseBody) {
      const contentType = response.headers.get('Content-Type')
      if (contentType)
        responseHeaders.set('Content-Type', contentType)
    }
  }

  return new Response(responseBody, { status: response.status, headers: responseHeaders })
}
```
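For orientation, a client creating an upload against this handler sends a POST shaped per the tus 1.0.0 protocol. A hedged sketch of the headers involved; the helper name is illustrative and the metadata here is ASCII-only:

```typescript
// Build the headers a tus 1.0.0 creation (POST) request carries.
// Upload-Metadata is a comma-separated list of "key base64(value)" pairs.
function buildTusCreateRequestHeaders(uploadLength: number, filename: string): Record<string, string> {
  const filenameB64 = btoa(filename) // fine for ASCII names; use TextEncoder for Unicode
  return {
    'Tus-Resumable': '1.0.0',
    'Upload-Length': String(uploadLength),
    'Upload-Metadata': `filename ${filenameB64}`,
  }
}

// Usage: POST these headers (empty body) to the upload endpoint; the 201
// response's Location header is the URL to PATCH chunks to afterwards.
const createHeaders = buildTusCreateRequestHeaders(1024, 'bundle.zip')
```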
Use `Context<MiddlewareKeyVariables>` and pull request-context fields.
Handlers should accept the typed context and access `c.get('apikey')` / `c.get('auth')` per backend conventions. Apply the same change to the PATCH and HEAD handlers too. As per coding guidelines, ...
🐛 Suggested change (pattern)

```diff
-import type { Context } from 'hono'
+import type { Context } from 'hono'
+import type { MiddlewareKeyVariables } from '../utils/hono.ts'
+
+type AppContext = Context<MiddlewareKeyVariables>

-export async function supabaseTusCreateHandler(c: Context): Promise<Response> {
+export async function supabaseTusCreateHandler(c: AppContext): Promise<Response> {
   const requestId = c.get('requestId')
+  const apikey = c.get('apikey')
+  const auth = c.get('auth')
   const rawFileId = c.get('fileId')
   if (typeof rawFileId !== 'string' || rawFileId.length === 0) {
     cloudlog({ requestId, message: 'supabaseTusCreateHandler missing or invalid fileId', fileId: rawFileId })
     return new Response(JSON.stringify({ error: 'internal_error', message: 'Internal server error: missing fileId' }), {
       status: 500,
       headers: { 'Content-Type': 'application/json' },
     })
   }
-  cloudlog({ requestId, message: 'supabaseTusCreateHandler', fileId: rawFileId })
+  cloudlog({
+    requestId,
+    message: 'supabaseTusCreateHandler',
+    fileId: rawFileId,
+    hasApikey: Boolean(apikey),
+    authType: auth?.authType,
+  })
```
🤖 Prompt for AI Agents
+ })🤖 Prompt for AI Agents
In `@supabase/functions/_backend/files/supabaseTusProxy.ts` around lines 209 -
265, Change the handler signatures to accept a typed context and read
auth/apikey from it: update supabaseTusCreateHandler (and likewise
supabaseTusPatchHandler and supabaseTusHeadHandler) to use
Context<MiddlewareKeyVariables> instead of Context, import
MiddlewareKeyVariables, and replace any implicit assumptions about
request-scoped auth with explicit c.get('apikey') and c.get('auth') lookups (use
these values when building headers via buildSupabaseAuthHeaders or anywhere auth
is forwarded); ensure any types for c.get returns are null-checked before use
and update forwardHeaders/transformMetadataForSupabase calls if they need the
apikey/auth values.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@supabase/functions/_backend/files/files.ts`:
- Around line 283-355: The decoded-TUS-ID logic in setKeyFromIdParam assumes
standard base64 (uses atob on normalizedFileId) but Supabase TUS IDs are
base64url (URL-safe chars and no padding); update the comment to say "base64url
encoded" and implement robust decoding that tries standard atob first and, on
failure or when input contains '-' or '_', converts base64url to standard base64
by replacing '-'->'+' and '_'->'/', adds '=' padding to length%4, then decodes;
keep all existing logging (cloudlog) and ensure extractedFileId is set and
c.set('fileId', extractedFileId) as before.
```diff
 const normalizedFileId = decodeURIComponent(fileId)

 // Check if this is a Supabase TUS upload ID (base64 encoded)
 // TUS upload IDs from Supabase are base64-encoded paths like: capgo/orgs/xxx/apps/yyy/file.zip/uuid
 let extractedFileId = normalizedFileId
 try {
   const decoded = atob(normalizedFileId)
   // If decoded starts with bucket name and contains orgs/, it's a TUS upload ID
   if (decoded.startsWith('capgo/') && decoded.includes('/orgs/')) {
     const parts = decoded.split('/')
     // Expected format:
     // [0]: 'capgo'
     // [1]: 'orgs'
     // [2]: orgId
     // [3]: 'apps'
     // [4]: appId
     // [5..n-2]: file path segments
     // [n-1]: UUID
     if (
       parts.length >= 6
       && parts[0] === 'capgo'
       && parts[1] === 'orgs'
       && parts[3] === 'apps'
     ) {
       cloudlog({
         requestId: c.get('requestId'),
         message: 'setKeyFromIdParam - detected Supabase TUS upload ID',
         decoded,
         parts,
       })
       // Extract file path: remove bucket prefix (capgo/) and UUID suffix
       // Resulting path starts with "orgs/..."
       const pathParts = parts.slice(1, parts.length - 1)
       if (pathParts.length > 0) {
         extractedFileId = pathParts.join('/')
         cloudlog({
           requestId: c.get('requestId'),
           message: 'setKeyFromIdParam - extracted fileId from TUS ID',
           extractedFileId,
           originalParts: parts,
           pathParts,
         })
       }
       else {
         cloudlog({
           requestId: c.get('requestId'),
           message: 'setKeyFromIdParam - TUS ID decoded but pathParts is empty, using normalizedFileId as fileId',
           decoded,
           parts,
         })
       }
     }
     else {
       cloudlog({
         requestId: c.get('requestId'),
         message: 'setKeyFromIdParam - decoded TUS ID has unexpected structure, using normalizedFileId as fileId',
         decoded,
         parts,
       })
     }
   }
 }
 catch {
   // Not a base64 string, use as-is
 }

 cloudlog({
   requestId: c.get('requestId'),
-  message: 'setKeyFromIdParam - after decodeURIComponent',
+  message: 'setKeyFromIdParam - final fileId',
   originalFileId: fileId,
   normalizedFileId,
+  extractedFileId,
 })
-c.set('fileId', normalizedFileId)
+c.set('fileId', extractedFileId)
```
🧩 Analysis chain
🌐 Web query:
Supabase Storage TUS upload ID encoding format base64url
💡 Result:
Supabase Storage’s TUS “upload ID” is not JSON—it’s a base64url-encoded path string.
Decoded (plain) format:
/<tenant-id>/<bucket-name>/<object-name>/<version-uuid>
Then that whole string is encoded using base64url (URL-safe base64, typically no = padding) and placed as the last path segment of the resumable upload URL. [1]
You can see real-world IDs that match base64url characteristics (URL-safe alphabet, no padding) in Supabase’s own issue/discussion URLs (the long final segment after /upload/resumable/…). [2]
Sources
- [1] (deepwiki.com)
- [2] (github.com)
Citations:
- 1: https://deepwiki.com/supabase/storage/5.3-tus-protocol
- 2: https://github.com/orgs/supabase/discussions/29676?utm_source=openai
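The base64url-to-base64 conversion described in that result can be sketched as follows; `decodeBase64Url` is a hypothetical helper, not the PR's actual code:

```typescript
// Convert a base64url string (URL-safe alphabet, no padding) back to the
// standard base64 alphabet, re-add '=' padding, then decode with atob.
function decodeBase64Url(value: string): string {
  const standard = value.replace(/-/g, '+').replace(/_/g, '/')
  const padded = standard.padEnd(Math.ceil(standard.length / 4) * 4, '=')
  return atob(padded)
}

// Round-trip example: encode a Supabase-style object path as base64url,
// the way a TUS upload ID would appear in a resumable upload URL.
const objectPath = 'capgo/orgs/org-1/apps/app-1/bundle.zip/3f9c2d'
const base64url = btoa(objectPath).replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '')
```

Because the function only swaps alphabets and re-pads, an already-standard base64 string passes through unchanged, so one decoder can serve both encodings.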
🏁 Script executed:

```shell
# Search the codebase for any references to TUS encoding or base64 handling
rg -i "tus|base64" supabase/functions/_backend --type ts -B 2 -A 2
```
Repository: Cap-go/capgo
Length of output: 50369

🏁 Script executed:

```shell
# Look for any tests or examples related to TUS upload IDs
fd -t f -e ts -e js . supabase/functions/_backend | xargs rg -l "TUS\|tus" 2>/dev/null | head -5
```
Repository: Cap-go/capgo
Length of output: 38

🏁 Script executed:

```shell
# Check the files.ts file for context around TUS handling
wc -l supabase/functions/_backend/files/files.ts
```
Repository: Cap-go/capgo
Length of output: 102

🏁 Script executed:

```shell
sed -n '270,360p' supabase/functions/_backend/files/files.ts
```
Repository: Cap-go/capgo
Length of output: 2855
Handle base64url-encoded TUS upload IDs: the current code only supports standard base64.
Supabase TUS upload IDs are base64url-encoded (not standard base64), using URL-safe characters (`-`, `_`) and omitting padding. The current code uses `atob()`, which fails on base64url input, silently leaving `extractedFileId` encoded. This breaks downstream access checks and resumable upload flows.
Update the comment from "base64 encoded" to "base64url encoded" for accuracy, and add support for both standard base64 and base64url decoding with format detection.
🛠️ Proposed fix (base64 + base64url support)

```diff
-  let extractedFileId = normalizedFileId
-  try {
-    const decoded = atob(normalizedFileId)
-    // If decoded starts with bucket name and contains orgs/, it's a TUS upload ID
-    if (decoded.startsWith('capgo/') && decoded.includes('/orgs/')) {
+  const looksLikeBase64 = /^[A-Z0-9+/]+=*$/i.test(normalizedFileId)
+  const looksLikeBase64Url = /^[A-Z0-9_-]+=*$/i.test(normalizedFileId)
+
+  const decodeBase64Url = (value: string) => {
+    const padded = value
+      .replace(/-/g, '+')
+      .replace(/_/g, '/')
+      .padEnd(Math.ceil(value.length / 4) * 4, '=')
+    return atob(padded)
+  }
+
+  let extractedFileId = normalizedFileId
+  try {
+    const decoded = looksLikeBase64
+      ? atob(normalizedFileId)
+      : looksLikeBase64Url
+        ? decodeBase64Url(normalizedFileId)
+        : null
+    if (decoded && decoded.startsWith('capgo/') && decoded.includes('/orgs/')) {
       const parts = decoded.split('/')
       // Expected format:
       // [0]: 'capgo'
       // [1]: 'orgs'
       // [2]: orgId
@@
-  catch {
+  catch (decodeError) {
+    cloudlog({
+      requestId: c.get('requestId'),
+      message: 'setKeyFromIdParam - base64 decode failed',
+      error: decodeError instanceof Error ? decodeError.message : String(decodeError),
+    })
     // Not a base64 string, use as-is
   }
```
🤖 Prompt for AI Agents
In `@supabase/functions/_backend/files/files.ts` around lines 283 - 355, The
decoded-TUS-ID logic in setKeyFromIdParam assumes standard base64 (uses atob on
normalizedFileId) but Supabase TUS IDs are base64url (URL-safe chars and no
padding); update the comment to say "base64url encoded" and implement robust
decoding that tries standard atob first and, on failure or when input contains
'-' or '_', converts base64url to standard base64 by replacing '-'->'+' and
'_'->'/', adds '=' padding to length%4, then decodes; keep all existing logging
(cloudlog) and ensure extractedFileId is set and c.set('fileId',
extractedFileId) as before.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tests/cli-sdk-utils.ts (1)
137-138: Outdated JSDoc comment. The function docstring says "uses zip format" but the code now defaults to the TUS resumable upload protocol. Update the comment to reflect the current behavior.
📝 Suggested fix

```diff
 /**
  * Upload a bundle using the SDK with test-specific defaults
- * Provides: auto path calculation, disables code checks, uses zip format
+ * Provides: auto path calculation, disables code checks, uses TUS resumable upload
  */
```
🧹 Nitpick comments (1)
tests/cli-sdk-utils.ts (1)
152-155: Minor comment clarity. The comment says "useZip: false uses TUS" but `useZip` is actually omitted (not set to `false`). Consider clarifying:
📝 Suggested fix

```diff
 const options: UploadOptions = {
   appId,
   path: join(tempFileFolder(appId), 'dist'),
   bundle: version,
   channel,
   disableCodeCheck: true, // Skip notifyAppReady check for tests
-  // useZip: false uses TUS resumable upload protocol
+  // Omitting useZip defaults to TUS resumable upload protocol
   ...additionalOptions,
 }
```
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@tests/cli-sdk-utils.ts`:
- Around line 39-40: Replace the hardcoded SUPABASE_URL usage for localApiFiles
with the test routing helper: call getEndpointUrl('/functions/v1/files') and
assign its result to localApiFiles instead of
`${SUPABASE_URL}/functions/v1/files`; update any import or scope to ensure
getEndpointUrl is available where localApiFiles is defined so tests (including
CF Workers mode) use the routing helper.
```ts
// Point TUS uploads to local Supabase instance for testing
localApiFiles: `${SUPABASE_URL}/functions/v1/files`,
```
Use `getEndpointUrl()` for test routing instead of raw `SUPABASE_URL`.
Directly composing `localApiFiles` from `SUPABASE_URL` bypasses the test routing helper and can break CF Workers mode. Please use `getEndpointUrl(path)` here and adjust the path to match that helper's contract. As per coding guidelines, backend tests must use the helper for correct routing.
🔧 Proposed update

```diff
-import { APIKEY_TEST_ALL } from './test-utils'
+import { APIKEY_TEST_ALL, getEndpointUrl } from './test-utils'
@@
-  localApiFiles: `${SUPABASE_URL}/functions/v1/files`,
+  localApiFiles: getEndpointUrl('/files'),
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
import { APIKEY_TEST_ALL, getEndpointUrl } from './test-utils'
// ...
// Point TUS uploads to local Supabase instance for testing
localApiFiles: getEndpointUrl('/files'),
```
🤖 Prompt for AI Agents
In `@tests/cli-sdk-utils.ts` around lines 39 - 40, Replace the hardcoded
SUPABASE_URL usage for localApiFiles with the test routing helper: call
getEndpointUrl('/functions/v1/files') and assign its result to localApiFiles
instead of `${SUPABASE_URL}/functions/v1/files`; update any import or scope to
ensure getEndpointUrl is available where localApiFiles is defined so tests
(including CF Workers mode) use the routing helper.
…osted Add Supabase TUS proxy handlers to support resumable uploads on self-hosted deployments. Implements POST/PATCH/HEAD handlers that proxy to Supabase Storage's TUS endpoint, including proper URL rewriting and metadata transformation for both local and production environments. Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
- Add timeout and error handling to all TUS proxy handlers (5min timeout) - Add uploadId extraction validation with robust URL parsing - Add fileId validation in supabaseTusCreateHandler - Use UTF-8-safe base64 encoding (TextEncoder) for Unicode support - Preserve filetype metadata when proxying to Supabase Storage - Forward Supabase error bodies for better debugging - Add early TUS HEAD detection to avoid unnecessary middleware overhead - Improve TUS upload ID parsing with structure validation - Remove unused tusUploadId context variable - Add warning log when using SUPABASE_URL as fallback for external URL - Clarify comment about TUS support being for Edge Functions runtime - Fix test file: remove unused vars, use Blob for body, lowercase describe blocks, type cast JSON response Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
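The UTF-8-safe base64 encoding mentioned in this commit (TextEncoder-based encoding for TUS Upload-Metadata values) can be sketched as follows; the helper names are illustrative, not the PR's actual code:

```typescript
// btoa() alone throws on characters outside Latin-1, so first encode the
// string to UTF-8 bytes, then base64 the resulting byte string.
function toBase64Utf8(value: string): string {
  const bytes = new TextEncoder().encode(value)
  let binary = ''
  for (const byte of bytes)
    binary += String.fromCharCode(byte)
  return btoa(binary)
}

// tus Upload-Metadata header: comma-separated "key base64(value)" pairs.
function encodeTusMetadata(meta: Record<string, string>): string {
  return Object.entries(meta)
    .map(([key, value]) => `${key} ${toBase64Utf8(value)}`)
    .join(',')
}
```

Decoding on the receiving side reverses the steps: base64-decode to a byte string, rebuild the `Uint8Array`, then `TextDecoder` recovers the original Unicode text.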
- Fix TypeScript error: use data.buffer instead of new Blob([data]) for fetch body - Refactor TUS proxy handlers to reduce code duplication (addresses SonarCloud 11.2% duplication): - Extract buildSupabaseAuthHeaders() for common auth header setup - Extract forwardHeaders() for forwarding client headers to Supabase - Extract copyResponseHeaders() for copying Supabase response headers - Extract proxyToSupabase() for common request handling with timeout/error - Extract readErrorBody() for reading error responses Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Cloudflare Workers types don't include Uint8Array in BodyInit. Use type assertion since Uint8Array is a valid fetch body at runtime. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Move TUS HEAD detection after setKeyFromIdParam and checkWriteAppAccess middleware to prevent unauthorized probing of upload progress. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Remove useZip: true to make CLI tests use TUS resumable upload protocol instead of legacy zip upload, testing the new TUS proxy implementation. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The CLI SDK has a hardcoded TUS endpoint (files.capgo.app) that doesn't respect supaHost. Until CLI SDK supports custom TUS endpoints, tests must use legacy zip upload. The server-side TUS proxy works correctly - this is a CLI SDK limitation. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Set localApiFiles in capacitor.config.json to point TUS uploads to the local Supabase instance. This enables CLI tests to use TUS protocol with the new TUS proxy implementation. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
c8e4d91 to 5d712e2 (Compare)
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@tests/cli-utils.ts`:
- Around line 144-145: The inline comment above the writeFile call is incorrect:
it says "Create a minimal package.json" but the code writes
'.package-lock.json'; update either the comment or the written filename to match
intent. Locate the writeFile invocation that writes join(nodeModulesPath,
'.package-lock.json') and change the comment to reference ".package-lock.json"
(or change the filename to 'package.json' if the sentinel should be
package.json) so the comment and the writeFile call are consistent.
```ts
// Create a minimal package.json in node_modules to indicate it's "installed"
await writeFile(join(nodeModulesPath, '.package-lock.json'), '{"name": "temp", "lockfileVersion": 1}')
```
Comment/file mismatch for the install sentinel.
The comment mentions creating a `package.json`, but the code writes `.package-lock.json`. Please align the comment (or the file) to avoid confusion.
✏️ Suggested comment fix

```diff
-  // Create a minimal package.json in node_modules to indicate it's "installed"
+  // Create a minimal .package-lock.json in node_modules to indicate it's "installed"
```
📝 Committable suggestion
+ // Create a minimal .package-lock.json in node_modules to indicate it's "installed"📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
// Create a minimal .package-lock.json in node_modules to indicate it's "installed"
await writeFile(join(nodeModulesPath, '.package-lock.json'), '{"name": "temp", "lockfileVersion": 1}')
```
🤖 Prompt for AI Agents
In `@tests/cli-utils.ts` around lines 144 - 145, The inline comment above the
writeFile call is incorrect: it says "Create a minimal package.json" but the
code writes '.package-lock.json'; update either the comment or the written
filename to match intent. Locate the writeFile invocation that writes
join(nodeModulesPath, '.package-lock.json') and change the comment to reference
".package-lock.json" (or change the filename to 'package.json' if the sentinel
should be package.json) so the comment and the writeFile call are consistent.
* feat: enable TUS upload support via Supabase Storage proxy for self-hosted
  Add Supabase TUS proxy handlers to support resumable uploads on self-hosted deployments. Implements POST/PATCH/HEAD handlers that proxy to Supabase Storage's TUS endpoint, including proper URL rewriting and metadata transformation for both local and production environments.
* fix: address PR review comments for TUS upload support
  - Add timeout and error handling to all TUS proxy handlers (5min timeout)
  - Add uploadId extraction validation with robust URL parsing
  - Add fileId validation in supabaseTusCreateHandler
  - Use UTF-8-safe base64 encoding (TextEncoder) for Unicode support
  - Preserve filetype metadata when proxying to Supabase Storage
  - Forward Supabase error bodies for better debugging
  - Add early TUS HEAD detection to avoid unnecessary middleware overhead
  - Improve TUS upload ID parsing with structure validation
  - Remove unused tusUploadId context variable
  - Add warning log when using SUPABASE_URL as fallback for external URL
  - Clarify comment about TUS support being for Edge Functions runtime
  - Fix test file: remove unused vars, use Blob for body, lowercase describe blocks, type cast JSON response
* fix: address additional PR review comments
  - Fix TypeScript error: use data.buffer instead of new Blob([data]) for fetch body
  - Refactor TUS proxy handlers to reduce code duplication (addresses SonarCloud 11.2% duplication): extract buildSupabaseAuthHeaders(), forwardHeaders(), copyResponseHeaders(), proxyToSupabase(), and readErrorBody()
* fix: use type assertion for Uint8Array body in TUS tests
  Cloudflare Workers types don't include Uint8Array in BodyInit. Use type assertion since Uint8Array is a valid fetch body at runtime.
* fix: ensure HEAD TUS requests enforce app-level access checks
  Move TUS HEAD detection after setKeyFromIdParam and checkWriteAppAccess middleware to prevent unauthorized probing of upload progress.
* test: enable TUS upload in CLI SDK tests
  Remove useZip: true to make CLI tests use TUS resumable upload protocol instead of legacy zip upload, testing the new TUS proxy implementation.
* test: re-enable useZip for CLI tests (TUS needs CLI SDK update)
  The CLI SDK has a hardcoded TUS endpoint (files.capgo.app) that doesn't respect supaHost. Until the CLI SDK supports custom TUS endpoints, tests must use legacy zip upload. The server-side TUS proxy works correctly; this is a CLI SDK limitation.
* test: configure localApiFiles for CLI TUS uploads
  Set localApiFiles in capacitor.config.json to point TUS uploads to the local Supabase instance. This enables CLI tests to use TUS protocol with the new TUS proxy implementation.
* fix(backend): handle delete_failed_version already-deleted
* fix(tests): force SDK uploads to use local config
* fix(tests): link temp apps to root node_modules

Co-authored-by: Claude Haiku 4.5 <noreply@anthropic.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Summary
Enable TUS (resumable upload) support for self-hosted Capgo deployments by proxying to Supabase Storage's TUS endpoint. Implements POST/PATCH/HEAD handlers with proper URL rewriting and metadata transformation for both local development and production environments.
Test plan
Run the TUS upload test suite:

```shell
bun run test tus-upload.test.ts
```

Tests cover:
Checklist
Summary by CodeRabbit
New Features
Chores
Tests
Bug Fixes