Describe the bug
It looks like we may have a hard-coded assumption in shuffle that an incoming batch never has more rows than the default batch size, so a larger batch cannot be processed.
I have seen this error more than once, but I now have a repro, so I will document this further as I investigate.
org.apache.comet.CometNativeException: range end index 8221 out of range for slice of length 8192
...
at org.apache.comet.Native.executePlan(Native Method)
at org.apache.comet.CometExecIterator.executeNative(CometExecIterator.scala:72)
at org.apache.comet.CometExecIterator.getNextBatch(CometExecIterator.scala:129)
at org.apache.comet.CometExecIterator.hasNext(CometExecIterator.scala:144)
at org.apache.spark.sql.comet.execution.shuffle.CometShuffleWriteProcessor.write(CometShuffleExchangeExec.scala:492)
at org.apache.spark.sql.comet.shims.ShimCometShuffleWriteProcessor.write(ShimCometShuffleWriteProcessor.scala:35)
at org.apache.spark.sql.comet.shims.ShimCometShuffleWriteProcessor.write$(ShimCometShuffleWriteProcessor.scala:28)
at org.apache.spark.sql.comet.execution.shuffle.CometShuffleWriteProcessor.write(CometShuffleExchangeExec.scala:448)
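For illustration only, here is a minimal Rust sketch of the suspected failure mode. This is not Comet's actual shuffle code; the buffer, function name, and the 8192 default are assumptions inferred from the panic message, but it shows how a scratch buffer sized to the default batch size panics when sliced to the row count of a larger incoming batch.

```rust
// Minimal sketch of the suspected failure mode, NOT Comet's actual shuffle code.
// A buffer sized to an assumed default batch size of 8192 rows is sliced with
// the row count of a larger incoming batch, which panics with the same message
// as above: "range end index 8221 out of range for slice of length 8192".

const DEFAULT_BATCH_SIZE: usize = 8192; // assumed default batch size

fn process_batch(incoming_num_rows: usize) {
    // Scratch buffer allocated once, sized to the default batch size.
    let scratch = vec![0u8; DEFAULT_BATCH_SIZE];

    // If an upstream operator hands over a batch with more rows than the
    // buffer was sized for, this slice is out of range and the native plan panics.
    let _rows = &scratch[..incoming_num_rows];
}

fn main() {
    process_batch(8221); // panics: range end index 8221 out of range for slice of length 8192
}
```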
Steps to reproduce
No response
Expected behavior
No response
Additional context
No response