What version of gRPC-Java are you using?
1.71.0
What is your environment?
What did you expect to see?
Version 1.70.0
StatusRuntimeException caught: Status{code=UNAVAILABLE, description=io exception, cause=io.grpc.netty.shaded.io.netty.channel.ConnectTimeoutException: connection timed out after 30000 ms: /[2a00:1450:400c:c07:0:0:0:6a]:5999
    at io.grpc.netty.shaded.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:263)
    at io.grpc.netty.shaded.io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98)
    at io.grpc.netty.shaded.io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:153)
    at io.grpc.netty.shaded.io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:173)
    at io.grpc.netty.shaded.io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:166)
    at io.grpc.netty.shaded.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:469)
    at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:569)
    at io.grpc.netty.shaded.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:994)
What did you see instead?
Version 1.71.0
StatusRuntimeException caught: Status{code=INTERNAL, description=Panic! This is a bug!, cause=java.lang.IllegalStateException: Index is past the end of the address group list
    at io.grpc.internal.PickFirstLeafLoadBalancer$Index.getCurrentAddress(PickFirstLeafLoadBalancer.java:678)
    at io.grpc.internal.PickFirstLeafLoadBalancer.acceptResolvedAddresses(PickFirstLeafLoadBalancer.java:143)
    at io.grpc.internal.AutoConfiguredLoadBalancerFactory$AutoConfiguredLoadBalancer.tryAcceptResolvedAddresses(AutoConfiguredLoadBalancerFactory.java:142)
    at io.grpc.internal.ManagedChannelImpl$NameResolverListener.onResult2(ManagedChannelImpl.java:1815)
    at io.grpc.internal.ManagedChannelImpl$NameResolverListener$1NamesResolved.run(ManagedChannelImpl.java:1674)
    at io.grpc.SynchronizationContext.drain(SynchronizationContext.java:96)
    at io.grpc.SynchronizationContext.execute(SynchronizationContext.java:128)
    at io.grpc.internal.ManagedChannelImpl$NameResolverListener.onResult(ManagedChannelImpl.java:1681)
    at io.grpc.internal.RetryingNameResolver$RetryingListener.onResult(RetryingNameResolver.java:99)
    at com.grpc.example.FoobarResolver.lambda$start$0(FoobarResolver.java:35)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572)
    at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:358)
    at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
    at java.base/java.lang.Thread.run(Thread.java:1583)
Steps to reproduce the bug
https://github.com/tommyulfsparre/pick-first-leaf-repro
Using the experimental PickFirstLeafLoadBalancer with Happy Eyeballs enabled and a NameResolver that returns an IPv6 address (sketched below) can trigger the panic above. The panic does not occur in version 1.70.0. PRs #11849 #11624 might be relevant. At a cursory glance, there appears to be a race involving the Happy Eyeballs scheduled connection attempt.
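For context, a simplified sketch of what the resolver in the repro does (the actual code is in the repository linked above; the class name, authority, and one-second refresh cadence here are illustrative):

```java
import io.grpc.EquivalentAddressGroup;
import io.grpc.NameResolver;
import java.net.InetSocketAddress;
import java.util.Collections;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative stand-in for the FoobarResolver seen in the trace above;
// names and timing are assumptions, not the exact repro code.
final class FoobarResolver extends NameResolver {
  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();
  private volatile Listener2 listener;

  @Override
  public String getServiceAuthority() {
    return "example.com";
  }

  @Override
  public void start(Listener2 listener) {
    this.listener = listener;
    // Results are delivered from a scheduled executor thread, matching the
    // ScheduledThreadPoolExecutor frames in the 1.71.0 stack trace.
    scheduler.scheduleAtFixedRate(this::resolve, 0, 1, TimeUnit.SECONDS);
  }

  private void resolve() {
    // A single IPv6 address group, as returned in the failing scenario.
    EquivalentAddressGroup addressGroup = new EquivalentAddressGroup(
        new InetSocketAddress("2a00:1450:400c:c07::6a", 5999));
    listener.onResult(ResolutionResult.newBuilder()
        .setAddresses(Collections.singletonList(addressGroup))
        .build());
  }

  @Override
  public void shutdown() {
    scheduler.shutdownNow();
  }
}
```

The new pick-first policy is gated behind the GRPC_EXPERIMENTAL_ENABLE_NEW_PICK_FIRST environment variable; the repro repository shows the exact flags used to enable it together with Happy Eyeballs.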