This repository was archived by the owner on Jul 19, 2025. It is now read-only.

Watching for more than 3 pods doesn't work #454

@ghost

Description

Hello.
I found a correlation between the pod count and the ksync watch status.

Here is my config:

apikey: ksync
context: "mycontext"
daemonset-namespace: kube-system
docker-root: /var/lib/docker
docker-socket: /var/run/docker.sock
log-level: info
namespace: pr-22572
output: pretty
port: 40322
syncthing-port: 8384

spec:
- name: monolith
  containername: ""
  pod: ""
  selector:
  - app=pr-22572-monolith-monolith
  - ksync-role=web-shared
  namespace: pr-22572
  localpath: mydir/monolith
  remotepath: /project
  reload: true
  localreadonly: false
  remotereadonly: false
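
In case it's useful, the pods this spec should match can be listed with the same labels as the selector above (namespace and labels copied from my config):

kubectl -n pr-22572 get pods \
  -l app=pr-22572-monolith-monolith,ksync-role=web-shared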

When I have 4 pods in the deployment, 3 of them are in sync, but one is not:

### Ksync status: ###
    NAME      LOCAL      REMOTE     STATUS                            POD                           CONTAINER
-----------+----------+----------+----------+-----------------------------------------------------+------------
  monolith   monolith   /project
                                   watching   pr-22572-monolith-monolith-web-7df599c968-jllbc
                                   watching   pr-22572-monolith-monolith-web-69bd84785d-lz6zl
                                   watching   pr-22572-monolith-monolith-web-7df599c968-xjgpc
                                   starting   pr-22572-monolith-monolith-web-69bd84785d-bxfmz

### Kubernetes pods status: ###
NAME                                                  READY   STATUS    RESTARTS   AGE
pr-22572-monolith-monolith-web-7df599c968-jllbc       1/1     Running   0          31m
pr-22572-monolith-monolith-web-7df599c968-xjgpc       1/1     Running   0          5m51s
pr-22572-monolith-monolith-web-69bd84785d-bxfmz       1/1     Running   0          31m

If I set the deployment to 3 pods, synchronization works fine and all 3 pods are in the watching state.
I reproduced this a few times: whenever the deployment has 4 or more pods, some of them never reach the watching state, as in the steps below.
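
For what it's worth, this is how I reproduce it (the deployment name is my guess from the pod names above):

# scale the deployment to 4 replicas
kubectl -n pr-22572 scale deployment pr-22572-monolith-monolith-web --replicas=4
# leave ksync watch running and check the spec status again:
# the extra pod never moves past "starting"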

In the ksync watch output I see a lot of these warnings:

time="2020-10-02T18:08:32+03:00" level=warning msg="Get \"http://localhost:8384/rest/events?since=11585\": dial tcp [::1]:8384: connect: connection refused"

I tried to find a setting to increase parallelization but couldn't find one. Maybe this is an issue with gRPC?

Thanks.
