fix: support parsing extension type arrays like CITEXT #28941

przpl wants to merge 2 commits into prisma:main from
Conversation
Walkthrough

Adds OID metadata caching and extension-array parsing to the PostgreSQL adapter; threads a shared OidMetadataCache through PgQueryable/PgTransaction/PrismaPgAdapter; and enforces that list-arity fields receive actual arrays in the data mapper (throwing on non-array input).
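The list-arity enforcement mentioned in the walkthrough can be sketched as follows. This is a hypothetical illustration, not the actual Prisma source: the field shape and the `mapListField` helper are assumptions, while `DataMapperError` is the error type named in the review.

```typescript
// Hypothetical sketch of the strict array check for list-arity fields.
// The real data mapper lives in
// packages/client-engine-runtime/src/interpreter/data-mapper.ts.
class DataMapperError extends Error {}

function mapListField(fieldName: string, value: unknown): unknown[] {
  if (!Array.isArray(value)) {
    // Fail early with a descriptive error instead of letting a raw
    // string (e.g. an unparsed '{a,b}' array literal) flow downstream.
    throw new DataMapperError(
      `Expected an array for list field '${fieldName}', got ${typeof value}`,
    )
  }
  return value
}
```

The point of the check is exactly the bug class this PR fixes: an unparsed extension array arrives as a string, and the mapper now rejects it loudly instead of producing confusing results later.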
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant PrismaPgAdapter
    participant PgQueryable
    participant Postgres
    participant OidMetadataCache
    Client->>PrismaPgAdapter: request query
    PrismaPgAdapter->>PgQueryable: execute query (shared OidMetadataCache)
    PgQueryable->>Postgres: send SQL (results may include extension array text)
    Postgres-->>PgQueryable: rows (text arrays, extension type OIDs)
    PgQueryable->>OidMetadataCache: check for unknown extension OIDs
    alt unknown OIDs present
        PgQueryable->>Postgres: query pg_type for OIDs
        Postgres-->>PgQueryable: pg_type rows (oid, typelem)
        PgQueryable->>OidMetadataCache: update cache
    end
    PgQueryable->>PgQueryable: post-process rows (parseExtensionArray using cache)
    PgQueryable-->>PrismaPgAdapter: return processed rows
    PrismaPgAdapter-->>Client: return results
```
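The cache interactions in the diagram above can be sketched as a small class. Only the `OidMetadataCache` name and the `oid`/`typelem` columns come from this PR and the review; the method names and shapes here are assumptions for illustration.

```typescript
// Minimal sketch of an OID metadata cache, assuming a Map-backed store.
interface OidMetadata {
  oid: number
  typelem: number // element type OID for array types, 0 otherwise
}

class OidMetadataCache {
  private entries = new Map<number, OidMetadata>()

  get(oid: number): OidMetadata | undefined {
    return this.entries.get(oid)
  }

  // Idempotent update: re-inserting the same OID is harmless, which is
  // why concurrent population by parallel queries is safe.
  update(rows: OidMetadata[]): void {
    for (const row of rows) this.entries.set(row.oid, row)
  }

  // OIDs that still need a catalog lookup, e.g.
  //   SELECT oid, typelem FROM pg_type WHERE oid = ANY($1)
  missing(oids: number[]): number[] {
    return oids.filter((oid) => !this.entries.has(oid))
  }
}
```

Extension types like CITEXT get their OIDs assigned at `CREATE EXTENSION` time, so they cannot be hard-coded; a per-adapter cache like this fills in the metadata lazily on first encounter.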
Let me know if this is the right direction. If so, I'll check if the
Actionable comments posted: 1
📜 Review details
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (2)
packages/adapter-pg/src/pg.ts (8 hunks)
packages/client-engine-runtime/src/interpreter/data-mapper.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (AGENTS.md)
**/*.{ts,tsx,js,jsx}: Use kebab-case for new file names (e.g., query-utils.ts, filter-operators.test.ts)
Avoid creating barrel files (index.ts files that re-export from other modules). Import directly from the source file (e.g., import { foo } from './utils/query-utils', not import { foo } from './utils'), unless an ./utils/index.ts file already exists
Files:
packages/client-engine-runtime/src/interpreter/data-mapper.ts
packages/adapter-pg/src/pg.ts
🧬 Code graph analysis (1)
packages/client-engine-runtime/src/interpreter/data-mapper.ts (1)
packages/client-engine-runtime/src/index.ts (1)
DataMapperError (3-3)
🔇 Additional comments (6)
packages/client-engine-runtime/src/interpreter/data-mapper.ts (1)
131-136: LGTM! Good defensive validation. This strict array check ensures that list fields always receive actual arrays, surfacing a clear DataMapperError early rather than allowing silent failures or confusing runtime errors downstream. The error message is appropriately descriptive for debugging.

packages/adapter-pg/src/pg.ts (5)
177-191: Design clarification: the two-pass parsing approach is correct. The getTypeParser hook handles subsequent queries where OID metadata is already cached, while the post-processing loop (lines 93-100) handles first-encounter cases where the cache wasn't populated yet. This two-pass approach is intentional and correctly addresses the timing constraint. Minor inconsistency: line 86 uses metadata.typelem > 0 while line 187 uses metadata?.typelem (truthy). Both work correctly since typelem is 0 for non-arrays, but consistent style would improve readability.
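One way to resolve the stylistic inconsistency the review points out would be a shared predicate used at both call sites. This helper is hypothetical — it does not exist in the PR — but it shows why the two spellings agree:

```typescript
// Hypothetical helper unifying the two checks from the review.
interface OidMetadata {
  typelem: number // 0 for non-array types
}

function isExtensionArrayType(metadata: OidMetadata | undefined): boolean {
  // Because typelem is 0 for non-arrays, `> 0` and truthiness behave
  // identically here; the explicit comparison just reads unambiguously.
  return metadata !== undefined && metadata.typelem > 0
}
```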
213-228: LGTM! Safe recursive performIO call.The catalog query correctly uses parameterized SQL and built-in types, so it won't trigger recursive OID lookups. The race condition documentation is helpful and the behavior (idempotent cache updates) is safe.
272-282: LGTM! Proper cache initialization and sharing.The cache is correctly instantiated once per adapter and shared across all queries and transactions. The memory growth concern is already documented with a note about potential LRU eviction if needed.
304-305: LGTM!Correctly passes the shared cache to transactions, ensuring consistent OID metadata handling across the adapter lifecycle.
46-48: No action needed on the parseExtensionArray implementation. The function correctly handles CITEXT arrays, which use text-based elements. PostgreSQL extensions that support array syntax are limited to text-based types like CITEXT, where string[] is the appropriate return type. The implementation is correct for current use cases.
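To make the comment above concrete, parsing a PostgreSQL text array literal looks roughly like this. This is an illustrative hand-rolled sketch, not the PR's actual parseExtensionArray; NULL elements and nested arrays are deliberately ignored for brevity.

```typescript
// Sketch of parsing a Postgres array literal such as a CITEXT[] value,
// e.g. '{hello,world}' or '{"a b",c}', into a string[].
function parseTextArray(input: string): string[] {
  if (input === '{}') return []
  const body = input.slice(1, -1) // strip the outer braces
  const out: string[] = []
  let current = ''
  let inQuotes = false
  for (let i = 0; i < body.length; i++) {
    const ch = body[i]
    if (inQuotes) {
      if (ch === '\\') { current += body[++i]; continue } // escaped char
      if (ch === '"') { inQuotes = false; continue }
      current += ch
    } else if (ch === '"') {
      inQuotes = true
    } else if (ch === ',') {
      out.push(current)
      current = ''
    } else {
      current += ch
    }
  }
  out.push(current)
  return out
}
```

Without a parser like this registered for the extension type's OID, the driver hands the raw `'{...}'` string through to the client, which is exactly the symptom this PR fixes.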
Since the underlying issue is blocking us from migrating to Prisma 7, just checking in here to see if anyone's looking into this from the Prisma side. Thx in advance for any update!
Fixes #28349
Problem Fixed
PostgreSQL extension types (like CITEXT arrays) couldn't be parsed correctly because they have dynamic OIDs that aren't known upfront. When querying extension type arrays, they were returned as unparsed strings instead of proper JavaScript arrays.
Solution: