
Fix serviceAccountName re-creating (wiping) existing DB #2220

Merged
crivetimihai merged 1 commit into main from update_deployment_postgres
Jan 21, 2026
Conversation

@ChrisPC-39
Collaborator

🐛 Bug-fix PR

📌 Summary

Fix a critical data-loss issue in the Helm chart caused by the unconditional addition of serviceAccountName to the postgres deployment. The change introduced in commit 5b7888d9 triggers postgres pod recreation on upgrade.

🔁 Reproduction Steps

  1. Deploy mcp-stack chart
  2. Populate database with data
  3. Upgrade to chart version containing commit 5b7888d or later (after Jan 14, 2026)
  4. Observe postgres pod recreation due to pod template spec change
  5. Result: all database data is lost because the replacement pod initializes a fresh, empty database

🐞 Root Cause

PR #1718 added serviceAccountName: {{ include "mcp-stack.serviceAccountName" . }} unconditionally to all deployments (deployment-postgres.yaml - line 49).

The problem:

  • Adding a new field to pod template spec triggers a rolling update
  • Rolling update recreates the pod
  • Pod recreation wipes the pod's ephemeral storage → complete data loss
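The mechanism above can be made concrete with a simplified before/after of the rendered postgres pod template (the surrounding fields are illustrative; only the serviceAccountName line is taken from the actual change):

```yaml
# Rendered pod template BEFORE commit 5b7888d9 (simplified, illustrative)
spec:
  template:
    spec:
      containers:
        - name: postgres
          image: postgres

# Rendered pod template AFTER commit 5b7888d9: one new field appears.
# The Deployment controller hashes the pod template; any difference,
# even serviceAccountName: default, forces a rolling replacement.
spec:
  template:
    spec:
      serviceAccountName: default   # new field -> template hash changes -> pods roll
      containers:
        - name: postgres
          image: postgres
```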

Why this is a breaking change:

  • The default serviceAccount.create: false means the helper returns "default"
  • Even though the value is "default", adding the field itself changes the pod spec
  • Kubernetes treats this as a configuration change requiring pod recreation
  • This affects ALL existing deployments, not just those using ServiceAccount features
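For context, the helper behavior described above matches the standard scaffold generated by `helm create`; the chart's actual helper may differ slightly, so treat this as an assumed sketch:

```yaml
{{/*
Typical serviceAccountName helper (assumed; mirrors the helm create scaffold).
With serviceAccount.create: false and no explicit name set, it returns "default".
*/}}
{{- define "mcp-stack.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "mcp-stack.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}
```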

💡 Fix Description

Make serviceAccountName conditional - only add it when serviceAccount.create: true:

# Before (unconditional - BREAKS existing deployments)
serviceAccountName: {{ include "mcp-stack.serviceAccountName" . }}

# After (conditional - backward compatible)
{{- if .Values.serviceAccount.create }}
serviceAccountName: {{ include "mcp-stack.serviceAccountName" . }}
{{- end }}

Benefits:

  1. Backward compatible: existing deployments with create: false (the default) won't have the field added
  2. No pod recreation: Without field change, Kubernetes doesn't trigger rolling update
  3. Preserves data: Ephemeral storage not wiped for users without persistence
  4. Opt-in behavior: Users who need ServiceAccount must explicitly enable it
  5. Still functional: ServiceAccount support works when explicitly enabled
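Users who do want the ServiceAccount can opt in via values. A minimal sketch, assuming the chart's standard serviceAccount block; the commented name override is hypothetical:

```yaml
# values.yaml override: opt in to a dedicated ServiceAccount.
# Note: for an existing release this re-adds serviceAccountName to the
# pod template, so the postgres pod WILL be recreated on the next upgrade.
serviceAccount:
  create: true
  # name: my-custom-sa   # optional; defaults to the chart's generated name
```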

🧪 Verification

| Check | Command | Status |
| --- | --- | --- |
| Existing deployment upgrades without pod recreation |  | Tested on live cluster |
| Data persists after upgrade |  | Verified database contents |

Manual Testing:

  1. Deployed chart with persistence.enabled: false and serviceAccount.create: false
  2. Populated database with test data
  3. Upgraded to patched chart version
  4. Verified postgres pod was NOT recreated
  5. Confirmed database data intact

✅ Checklist

  • Backward compatible with existing deployments
  • Verified no database pod recreation for default configuration

🔗 Related Issues

Fixes data loss regression introduced in #1718

@crivetimihai crivetimihai self-assigned this Jan 21, 2026
Signed-off-by: Chris PC <chrispc@li-4dc2bf4c-325d-11b2-a85c-b68e8b1fc307.ibm.com>
@crivetimihai crivetimihai force-pushed the update_deployment_postgres branch from 8ce34e9 to ebdc8a2 on January 21, 2026 00:40
@crivetimihai crivetimihai merged commit 8400e19 into main Jan 21, 2026
38 checks passed
@crivetimihai crivetimihai deleted the update_deployment_postgres branch January 21, 2026 00:40
kcostell06 pushed a commit to kcostell06/mcp-context-forge that referenced this pull request Feb 24, 2026
Signed-off-by: Chris PC <chrispc@li-4dc2bf4c-325d-11b2-a85c-b68e8b1fc307.ibm.com>
Co-authored-by: Chris PC <chrispc@li-4dc2bf4c-325d-11b2-a85c-b68e8b1fc307.ibm.com>