Working on dask-kubernetes, I noticed that my pods were reporting success even though they were timing out while trying to connect to a scheduler.
In this example, there is no scheduler running at this address:
```
bash-5.0$ dask-worker 192.168.7.26:8786 --death-timeout=1
2019-06-18 14:24:19,779 distributed.nanny[41007] INFO Start Nanny at: 'tcp://192.168.7.20:64424'
2019-06-18 14:24:20,790 distributed.nanny[41007] INFO Closing Nanny at 'tcp://192.168.7.20:64424'
2019-06-18 14:24:21,330 distributed.worker[41030] INFO Start worker at: tcp://192.168.7.20:64425
2019-06-18 14:24:21,331 distributed.worker[41030] INFO Listening to: tcp://192.168.7.20:64425
2019-06-18 14:24:21,331 distributed.worker[41030] INFO dashboard at: 192.168.7.20:64426
2019-06-18 14:24:21,332 distributed.worker[41030] INFO Waiting to connect to: tcp://192.168.7.26:8786
2019-06-18 14:24:21,332 distributed.worker[41030] INFO -------------------------------------------------
2019-06-18 14:24:21,333 distributed.worker[41030] INFO Threads: 8
2019-06-18 14:24:21,333 distributed.worker[41030] INFO Memory: 17.18 GB
2019-06-18 14:24:21,334 distributed.worker[41030] INFO Local Directory: /Users/taugspurger/worker-a2hgbeb6
2019-06-18 14:24:21,334 distributed.worker[41030] INFO -------------------------------------------------
2019-06-18 14:24:21,335 distributed.worker[41030] INFO Stopping worker at tcp://192.168.7.20:64425
2019-06-18 14:24:21,379 distributed.dask_worker[41007] INFO End worker
bash-5.0$ echo $?
0
```
The worker gives up after the death timeout, yet the process exits with status 0, so a supervisor like Kubernetes treats the pod as having succeeded. Would changing that exit code to be non-zero break anything else?
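The behavior being proposed can be sketched as follows. This is a hypothetical stand-in, not dask-worker's actual implementation: a worker-like subprocess that calls `sys.exit(1)` when it cannot reach its scheduler within a death timeout, so that the parent (or a pod supervisor) sees a non-zero exit status. The address `192.0.2.1` is a reserved TEST-NET address used here to guarantee the connection fails.

```python
import subprocess
import sys
import textwrap

# Hypothetical worker script (illustrative only): try to reach a
# scheduler within a death timeout, and exit non-zero on failure so
# that supervisors such as Kubernetes can detect and restart it.
fake_worker = textwrap.dedent("""
    import socket, sys

    DEATH_TIMEOUT = 1  # seconds

    try:
        # 192.0.2.1 is reserved for documentation, so no scheduler
        # listens there; the connect attempt will fail or time out.
        socket.create_connection(("192.0.2.1", 8786), timeout=DEATH_TIMEOUT)
    except OSError:
        sys.exit(1)  # death timeout hit: report failure to the parent
    sys.exit(0)      # connected: report success
""")

result = subprocess.run([sys.executable, "-c", fake_worker])
print(result.returncode)  # non-zero, since the scheduler is unreachable
```

With this change, `echo $?` after the failed `dask-worker` invocation above would print a non-zero status instead of 0, which is what restart policies key off.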