exec: check for unsupported by ArrowBatchConverter type #38887
craig[bot] merged 1 commit into cockroachdb:master
Conversation
Force-pushed c66a92b to a00aa8d
asubiotto
left a comment
Reviewed 5 of 5 files at r1.
Reviewable status: complete! 1 of 0 LGTMs obtained (waiting on @yuzefovich)
pkg/sql/exec/colserde/arrowbatchconverter.go, line 87 at r1 (raw file):
    func isSupported(t types.T) bool {
        switch t {
        case types.Bool, types.Bytes, types.Int8, types.Int16, types.Int32,
One other thing you could do is create a map of supported types and just check for existence, but it's up to you.
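The map-based variant the reviewer suggests could look roughly like the sketch below. The `T` type and its constants are hypothetical stand-ins for the real `types.T` values in `pkg/sql/exec/types`, and `Decimal` is assumed unsupported purely for illustration:

```go
package main

import "fmt"

// T is a hypothetical stand-in for the PR's types.T.
type T int

const (
	Bool T = iota
	Bytes
	Int8
	Int16
	Int32
	Int64
	Decimal // assumed unsupported in this sketch
)

// supportedTypes implements the reviewer's suggestion: membership in
// the map means the converter can handle the type.
var supportedTypes = map[T]struct{}{
	Bool:  {},
	Bytes: {},
	Int8:  {},
	Int16: {},
	Int32: {},
	Int64: {},
}

func isSupported(t T) bool {
	_, ok := supportedTypes[t]
	return ok
}

func main() {
	fmt.Println(isSupported(Bool), isSupported(Decimal)) // true false
}
```

A map lookup keeps the supported set in one data structure, at the cost of a small allocation at package init; a switch compiles to a jump table and needs no allocation, which is likely why either choice was considered acceptable here.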
pkg/sql/exec/colserde/arrowbatchconverter.go, line 91 at r1 (raw file):
        return true
    default:
        return false

nit: I think you can do this without a default case.
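The nit about dropping the default case amounts to letting the switch fall through to a final return, roughly as in this sketch (again with hypothetical stand-in type constants rather than the real `types.T` values):

```go
package main

import "fmt"

// T is a hypothetical stand-in for the PR's types.T.
type T int

const (
	Bool T = iota
	Bytes
	Int64
	Decimal // assumed unsupported in this sketch
)

// isSupported returns true for the types listed in the switch; any
// other type falls through to the final return, so no default case
// is needed.
func isSupported(t T) bool {
	switch t {
	case Bool, Bytes, Int64:
		return true
	}
	return false
}

func main() {
	fmt.Println(isSupported(Bytes), isSupported(Decimal)) // true false
}
```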
pkg/sql/exec/colserde/arrowbatchconverter_test.go, line 101 at r1 (raw file):
    typs, b := randomBatch()
    c, _ := NewArrowBatchConverter(typs)
I would still add a require.NoError(t, err) everywhere we call NewArrowBatchConverter
Previously, we would create an arrow batch converter without paying attention to whether we supported the types, which could lead to a panic during actual conversion. Now we do this check upfront so that we can fall back to DistSQL.

Release note: None
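The upfront check described in the commit message can be sketched as a constructor that validates every type before building the converter, so callers get an error (and can fall back to the row-based DistSQL engine) instead of panicking mid-conversion. The names and type constants below are illustrative stand-ins, not the PR's actual code:

```go
package main

import (
	"errors"
	"fmt"
)

// T is a hypothetical stand-in for the PR's types.T.
type T int

const (
	Bool T = iota
	Int64
	Decimal // assumed unsupported in this sketch
)

func isSupported(t T) bool {
	switch t {
	case Bool, Int64:
		return true
	}
	return false
}

type ArrowBatchConverter struct{ typs []T }

// NewArrowBatchConverter rejects unsupported types upfront so the
// caller can fall back to DistSQL instead of panicking later.
func NewArrowBatchConverter(typs []T) (*ArrowBatchConverter, error) {
	for _, t := range typs {
		if !isSupported(t) {
			return nil, errors.New("unsupported type")
		}
	}
	return &ArrowBatchConverter{typs: typs}, nil
}

func main() {
	if _, err := NewArrowBatchConverter([]T{Bool, Int64}); err == nil {
		fmt.Println("ok")
	}
	if _, err := NewArrowBatchConverter([]T{Decimal}); err != nil {
		fmt.Println("fallback to DistSQL")
	}
}
```

In tests, the returned error would then be asserted at every call site, which is what the reviewer's `require.NoError(t, err)` suggestion is about.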
Force-pushed a00aa8d to 7ddf6a2
yuzefovich
left a comment
Reviewable status: complete! 0 of 0 LGTMs obtained (and 1 stale) (waiting on @asubiotto)
pkg/sql/exec/colserde/arrowbatchconverter.go, line 87 at r1 (raw file):
Previously, asubiotto (Alfonso Subiotto Marqués) wrote…
One other thing you could do is create a map of supported types to just check for existence but up to you
I don't have a preference, so I went with your suggestion.
pkg/sql/exec/colserde/arrowbatchconverter_test.go, line 101 at r1 (raw file):
Previously, asubiotto (Alfonso Subiotto Marqués) wrote…
I would still add a require.NoError(t, err) everywhere we call NewArrowBatchConverter
Done.
TFTR! bors r+
38837: distsqlrun: make joinreader use limited batches r=jordanlewis a=jordanlewis

Closes #35950.

Previously, lookup join put no limit on the number of rows returned from each of its lookups. This could be hugely problematic. Imagine a circumstance in which a lookup join was planned on a pair of tables that had 1 billion matching rows on the right side for every row on the left side: in this case, the lookup join would ask the KV backend to return a single giant buffer with all 1 billion rows. Also, joinReader's tracing span collection was slightly broken; fix that while we're at it, to make sure we can test this change properly.

Release note (bug fix): prevent OOM conditions during lookup joins between tables with a very large n:1 relationship.

38887: exec: check for unsupported by ArrowBatchConverter type r=yuzefovich a=yuzefovich

Previously, we would create an arrow batch converter without paying attention to whether we supported the types, which could lead to a panic during actual conversion. Now we do this check upfront so that we can fall back to DistSQL.

Release note: None

Co-authored-by: Jordan Lewis <jordanthelewis@gmail.com>
Co-authored-by: Yahor Yuzefovich <yahor@cockroachlabs.com>
Build succeeded