NFSv4.2 is broken across different hosts #1565
Closed
Labels: kind/bug (Something isn't working)
Description
With Flatcar 3975.2.1 we see very odd NFS 4.2 behavior: one pod writes a file, but a pod on a different host is unable to see the just-written file content.
NFS 3 and 4.1 work as expected (4.0 has not been tested). Flatcar 3815.2.5 is also unaffected.
Impact
The NFS 4.2 mount is unusable.
Environment and steps to reproduce
- Set-up:
  - at least two nodes in a Kubernetes cluster running Flatcar 3975.2.1
  - set up nfs-ganesha:

```shell
helm repo add nfs-ganesha-server-and-external-provisioner https://kubernetes-sigs.github.io/nfs-ganesha-server-and-external-provisioner/
helm install my-release nfs-ganesha-server-and-external-provisioner/nfs-server-provisioner
```
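Before changing anything, it is worth confirming that the chart deployed; a quick check (the `app` label value is an assumption based on the chart labels shown below):

```shell
# Verify the provisioner pod is running and the StorageClass exists.
kubectl get pods -l app=nfs-server-provisioner
kubectl get storageclass nfs
```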
Update the mount options in the `nfs` `StorageClass`:
```yaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    meta.helm.sh/release-name: my-release
    meta.helm.sh/release-namespace: default
  labels:
    app: nfs-server-provisioner
    app.kubernetes.io/managed-by: Helm
    chart: nfs-server-provisioner-1.8.0
    heritage: Helm
    release: my-release
  name: nfs
mountOptions:
  - hard
  - retrans=3
  - proto=tcp
  - nfsvers=4.2
  - rsize=4096
  - wsize=4096
  - noatime
  - nodiratime
provisioner: cluster.local/my-release-nfs-server-provisioner
reclaimPolicy: Delete
volumeBindingMode: Immediate
```
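Most `StorageClass` fields cannot be patched in place, so one way to change the mount options is to recreate the object; a sketch (the file name is hypothetical):

```shell
# Export the current StorageClass, edit mountOptions
# (e.g. set nfsvers=4.2), then delete and re-apply it.
kubectl get storageclass nfs -o yaml > nfs-sc.yaml
# ... edit mountOptions in nfs-sc.yaml ...
kubectl delete storageclass nfs
kubectl apply -f nfs-sc.yaml
```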
Create the PVC:
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-dynamic-volume-claim
spec:
  storageClassName: "nfs"
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
```
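The claim can then be applied and checked for binding; a sketch, assuming the manifest above is saved as `pvc.yaml`:

```shell
kubectl apply -f pvc.yaml
# Wait until dynamic provisioning has bound the claim.
kubectl wait --for=jsonpath='{.status.phase}'=Bound \
  pvc/test-dynamic-volume-claim --timeout=60s
```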
Create the pods (the anti-affinity below forces them onto different hosts):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod-1
  labels:
    app: nginx
spec:
  containers:
    - name: test
      image: nginx
      volumeMounts:
        - name: config
          mountPath: /test
  volumes:
    - name: config
      persistentVolumeClaim:
        claimName: test-dynamic-volume-claim
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: nginx
          topologyKey: "kubernetes.io/hostname"
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod-2
  labels:
    app: nginx
spec:
  containers:
    - name: test
      image: nginx
      volumeMounts:
        - name: config
          mountPath: /test
  volumes:
    - name: config
      persistentVolumeClaim:
        claimName: test-dynamic-volume-claim
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: nginx
          topologyKey: "kubernetes.io/hostname"
```
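Before running the repro, it helps to confirm that the anti-affinity actually placed the pods on different nodes; a sketch:

```shell
# The two pods must be on different hosts for the bug to show up.
n1=$(kubectl get pod test-pod-1 -o jsonpath='{.spec.nodeName}')
n2=$(kubectl get pod test-pod-2 -o jsonpath='{.spec.nodeName}')
[ "$n1" != "$n2" ] && echo "pods are on different nodes ($n1, $n2)"
```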
- Action(s):
  a. `kubectl exec -it test-pod-1 -- bash -c 'echo "def" > /test/testfile'`
  b. `kubectl exec -it test-pod-2 -- bash -c 'cat /test/testfile'`
- Error: the call to `cat` should return "def", but returns nothing. Note that both pods see accurate metadata for the file (using `ls -la /test`).
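The two actions can be bundled into a single check; a sketch (assumes both pods are Running):

```shell
# Write from one host, read from the other, and compare.
kubectl exec test-pod-1 -- sh -c 'echo "def" > /test/testfile'
out=$(kubectl exec test-pod-2 -- cat /test/testfile)
if [ "$out" = "def" ]; then
  echo "OK: read-after-write across hosts works"
else
  echo "BUG: test-pod-2 read '$out' instead of 'def'"
fi
```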
Expected behavior
`cat` from test-pod-2 should return the just-written file content. Note that test-pod-1 itself is able to read the file contents.