
[BUG] Agent nodes have no valid KUBECONFIG file #1535

@johgoe

Description


I'm running a cluster with one server and one agent node. My agent container continuously logs this message to stdout:

```
E1215 11:20:49.464391   25089 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
The connection to the server localhost:8080 was refused - did you specify the right host or port?
```
My analysis shows that this message comes from kubectl. The reason is that there is no kubeconfig, which I can confirm by executing `podman exec -it k3d-kyma-agent-0 kubectl config view` on my agent node (output below). On the server node, however, a kubeconfig exists:

```yaml
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
```

I think the log comes from this line https://github.com/arikmaor/k3d/blob/45fa41893039fb218110d3764babe2dd72fb960f/pkg/types/fixes/assets/k3d-entrypoint.sh#L23, introduced in https://github.com/k3d-io/k3d/pull/1119.

Maybe it's caused by my rootless podman installation. Can you confirm it's working on a normal Docker installation? How is the KUBECONFIG provided to the agent nodes?
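For reference, here is a minimal diagnostic sketch I used while checking the nodes. It only assumes the standard k3s convention that the server writes its kubeconfig to `/etc/rancher/k3s/k3s.yaml`; the container name `k3d-kyma-agent-0` is from my setup. Without a `KUBECONFIG` env var or a config file, kubectl falls back to `http://localhost:8080`, which matches the error above.

```shell
#!/bin/sh
# Diagnostic sketch: report where (if anywhere) kubectl would find a kubeconfig.
# Run it inside a node container, e.g.:
#   podman exec -it k3d-kyma-agent-0 sh -c "$(cat check-kubeconfig.sh)"
if [ -n "${KUBECONFIG:-}" ] && [ -f "$KUBECONFIG" ]; then
  echo "kubeconfig found via KUBECONFIG env var: $KUBECONFIG"
elif [ -f /etc/rancher/k3s/k3s.yaml ]; then
  # Default path the k3s server writes; agents do not get this file.
  echo "kubeconfig found at /etc/rancher/k3s/k3s.yaml (k3s server default)"
else
  echo "no kubeconfig: kubectl falls back to http://localhost:8080"
fi
```

On my agent node this prints the fallback message, which is consistent with the connection-refused errors in the log.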

## Which version of `k3d`

- v5.7.5

## Which version of docker

rootless podman

```
Client:       Podman Engine
Version:      4.9.3
API Version:  4.9.3
Go Version:   go1.22.2
Built:        Thu Jan  1 01:00:00 1970
OS/Arch:      linux/amd64
```
