[monitoring] Add SLACK_SEVERITY_FILTER field and VMAgent for tenant monitoring #1712
Conversation
Note: CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough
Introduce array-based disabledSeverity fields for Slack and Telegram, and emit JSON-quoted severity filters as environment variables in the Alerta deployment template when the corresponding integrations are enabled.
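A minimal sketch of how such a filter could be rendered in the deployment template, assuming a list-valued disabledSeverity key under the Slack alert values (the key path and the surrounding condition are illustrative, not the chart's exact code):

```yaml
{{- if .Values.alerta.alerts.slack.url }}
- name: SLACK_SEVERITY_FILTER
  # toJson turns the list into e.g. ["informational","ok"]; quote wraps it for the env var.
  value: {{ .Values.alerta.alerts.slack.disabledSeverity | toJson | quote }}
{{- end }}
```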
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Summary of Changes
Hello @IvanHunters, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly enhances the monitoring-agents package by integrating Alertmanager, a crucial component for handling and routing alerts. The changes establish a complete alert notification pipeline, primarily focusing on Slack integration, and provide the necessary Kubernetes resources and configuration defaults to deploy and manage Alertmanager effectively. This ensures that critical system alerts can be efficiently processed and dispatched to relevant communication channels.
Using Gemini Code AssistThe full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips. Invoking Gemini You can request assistance from Gemini at any point by creating a comment using either
Customization To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a Limitations & Feedback Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for Github and other Google products, sign up here. You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension. Footnotes
|
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Code Review
This pull request introduces Alertmanager with Slack integration to the monitoring stack. While this is a valuable addition, the current implementation has several areas that need improvement to be considered production-ready. My review includes critical feedback on using a Deployment for a stateful service like Alertmanager, which should be a StatefulSet to handle state and high availability correctly. I've also pointed out issues with hardcoded values, conditional configuration for the Slack integration to prevent errors, and missing resource definitions. Addressing these points will significantly improve the robustness and maintainability of this new component.
Commented lines (alertmanager-deployment.yaml):

```yaml
@@ -0,0 +1,54 @@
{{- if .Values.alertmanager.enabled }}
apiVersion: apps/v1
kind: Deployment
```
For a stateful application like Alertmanager, a StatefulSet is more appropriate than a Deployment. A StatefulSet provides stable network identifiers and persistent storage, which are crucial for features like alert deduplication, silences, and high availability.
Using a Deployment with an emptyDir volume (as configured on line 53) will cause all state to be lost on pod restarts. Furthermore, disabling clustering via --cluster.listen-address= (line 26) will lead to duplicate notifications if you scale to more than one replica.
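A minimal sketch of the StatefulSet shape being suggested (labels, storage size, and args are assumptions, not the chart's actual manifest):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: alertmanager
spec:
  serviceName: alertmanager            # requires a headless Service of the same name
  replicas: {{ .Values.alertmanager.replicas }}
  selector:
    matchLabels:
      app: alertmanager
  template:
    metadata:
      labels:
        app: alertmanager
    spec:
      containers:
        - name: alertmanager
          image: "prom/alertmanager:v0.27.0"
          args:
            - --config.file=/etc/alertmanager/alertmanager.yml
            - --storage.path=/alertmanager
          volumeMounts:
            - name: storage
              mountPath: /alertmanager
  volumeClaimTemplates:                # persists silences and the notification log across restarts
    - metadata:
        name: storage
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```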
Commented lines (alertmanager-secret.yaml):

```yaml
route:
  group_by: ['alertname']
  group_wait: 10s
  group_interval: 10s
  repeat_interval: 1h
  receiver: 'slack'
  routes:
    - match:
        alertname: Watchdog
      receiver: 'null'
receivers:
  - name: 'null'
  - name: 'slack'
    slack_configs:
      - api_url: {{ .Values.alertmanager.slack.webhookUrl | quote }}
        channel: {{ .Values.alertmanager.slack.channel | quote }}
        username: {{ .Values.alertmanager.slack.username | quote }}
        icon_emoji: {{ .Values.alertmanager.slack.iconEmoji | quote }}
        title: '{{`{{ .GroupLabels.alertname }}`}}'
        text: '{{`{{ range .Alerts }}{{ .Annotations.summary }} - {{ .Annotations.description }}{{ end }}`}}'
```
The configuration unconditionally sets 'slack' as the default receiver and defines the slack_configs. If alertmanager.slack.webhookUrl is not provided in values.yaml, Alertmanager will fail to send notifications because the api_url will be empty.
To prevent this, you should conditionally configure the receiver and the route based on the presence of the webhookUrl.
```yaml
route:
  group_by: ['alertname']
  group_wait: 10s
  group_interval: 10s
  repeat_interval: 1h
  receiver: {{ if .Values.alertmanager.slack.webhookUrl }}'slack'{{ else }}'null'{{ end }}
  routes:
    - match:
        alertname: Watchdog
      receiver: 'null'
receivers:
  - name: 'null'
{{- if .Values.alertmanager.slack.webhookUrl }}
  - name: 'slack'
    slack_configs:
      - api_url: {{ .Values.alertmanager.slack.webhookUrl | quote }}
        channel: {{ .Values.alertmanager.slack.channel | quote }}
        username: {{ .Values.alertmanager.slack.username | quote }}
        icon_emoji: {{ .Values.alertmanager.slack.iconEmoji | quote }}
        title: '{{`{{ .GroupLabels.alertname }}`}}'
        text: '{{`{{ range .Alerts }}{{ .Annotations.summary }} - {{ .Annotations.description }}{{ end }}`}}'
{{- end }}
```

Commented lines (alertmanager-deployment.yaml):

```yaml
image: "prom/alertmanager:v0.27.0"
imagePullPolicy: IfNotPresent
```
The Alertmanager image, tag, and pull policy are hardcoded. It's a best practice to make these configurable in values.yaml to simplify upgrades.
Please add an image object to the alertmanager section in values.yaml like this:

```yaml
alertmanager:
  # ... other values
  image:
    repository: prom/alertmanager
    tag: v0.27.0
    pullPolicy: IfNotPresent
```

Then use these values in the template:

```yaml
image: "{{ .Values.alertmanager.image.repository }}:{{ .Values.alertmanager.image.tag }}"
imagePullPolicy: {{ .Values.alertmanager.image.pullPolicy }}
```

Commented lines (alertmanager-workloadmonitor.yaml):

```yaml
  name: alertmanager
spec:
  replicas: {{ .Values.alertmanager.replicas }}
  minReplicas: 1
```
minReplicas is hardcoded to 1. If a user sets alertmanager.replicas to 0 in values.yaml to disable the deployment, this WorkloadMonitor will have replicas: 0 and minReplicas: 1, which is an impossible condition to satisfy and will likely result in a persistent non-operational status for this workload. minReplicas should be 0 when the desired replica count is 0.
```yaml
minReplicas: {{ if gt (int .Values.alertmanager.replicas) 0 }}1{{ else }}0{{ end }}
```

Commented lines (vmalert.yaml):

```yaml
    url: http://vmselect-shortterm.tenant-root.svc:8481/select/0/prometheus
  remoteWrite:
    url: http://vminsert-shortterm.tenant-root.svc:8480/insert/0/prometheus/api/v1/write
  resources: {}
```
The resources for VMAlert are empty. It is a best practice to specify resource requests and limits for all workloads to ensure cluster stability and proper scheduling. These should be configurable via values.yaml.
Example for values.yaml:

```yaml
vmalert:
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
    limits:
      cpu: 200m
      memory: 256Mi
```

And in the template:

```yaml
  resources: {{- toYaml .Values.vmalert.resources | nindent 4 }}
```
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- packages/system/monitoring-agents/templates/alertmanager-deployment.yaml (1 hunks)
- packages/system/monitoring-agents/templates/alertmanager-secret.yaml (1 hunks)
- packages/system/monitoring-agents/templates/alertmanager-service.yaml (1 hunks)
- packages/system/monitoring-agents/templates/alertmanager-workloadmonitor.yaml (1 hunks)
- packages/system/monitoring-agents/templates/vmalert.yaml (1 hunks)
- packages/system/monitoring-agents/values.yaml (1 hunks)
🧰 Additional context used
🪛 YAMLlint (1.37.1)
packages/system/monitoring-agents/templates/alertmanager-secret.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
packages/system/monitoring-agents/templates/alertmanager-deployment.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
packages/system/monitoring-agents/templates/alertmanager-workloadmonitor.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
packages/system/monitoring-agents/templates/alertmanager-service.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
packages/system/monitoring-agents/templates/vmalert.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Build
🔇 Additional comments (8)
packages/system/monitoring-agents/values.yaml (1)
424-438: Alertmanager configuration defaults look reasonable.
The feature is opt-in (disabled by default), resource limits are conservative, and the Slack webhook is empty (requires user configuration to activate). Values are well-structured.
packages/system/monitoring-agents/templates/alertmanager-deployment.yaml (1)
23-26: Verify clustering behavior if replicas are scaled beyond 1.
Line 26 disables clustering with --cluster.listen-address= (empty value). This works correctly for the default replicas: 1, but if someone increases replicas, clustering will remain disabled, potentially causing alert deduplication and routing failures across multiple instances. Consider either:
- Making clustering conditional based on replica count, or
- Adding documentation warning users not to scale replicas without enabling clustering
Is single-replica-only the intended design, or should multi-replica deployments support clustering?
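For reference, a minimal sketch of the conditional approach (not the chart's actual template; the flag layout and the assumption that peers would be discovered via a headless Service are mine):

```yaml
args:
  - --config.file=/etc/alertmanager/alertmanager.yml
  {{- if gt (int .Values.alertmanager.replicas) 1 }}
  # With more than one replica, enable gossip clustering; peers would still need to be
  # passed via --cluster.peer entries, typically resolved through a headless Service.
  - --cluster.listen-address=0.0.0.0:9094
  {{- else }}
  - --cluster.listen-address=
  {{- end }}
```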
packages/system/monitoring-agents/templates/alertmanager-service.yaml (1)
1-17: Service definition is correct; the YAMLlint error is a false positive.
The Service correctly exposes Alertmanager on port 9093 with a proper selector. The YAMLlint error about "syntax error: expected the node content, but found '-'" is a false positive: Helm's {{- syntax is valid and removes whitespace. This can be safely ignored.

packages/system/monitoring-agents/templates/alertmanager-secret.yaml (2)
11-16: SMTP defaults are placeholder values and will not function.
Lines 12-16 configure SMTP to localhost:587 with empty credentials and no TLS. These are placeholder defaults that won't work for actual email notifications. This is likely intentional for an MVP, but users enabling Alertmanager should understand that SMTP requires explicit configuration outside of these values.
Verify that documentation exists explaining how to configure SMTP credentials if users want email notifications (or confirm SMTP is out of scope for this PR).
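If SMTP stays in scope, a hedged example of what documented defaults might look like in values.yaml (key names are illustrative, not the chart's actual schema):

```yaml
alertmanager:
  smtp:
    smarthost: smtp.example.com:587   # replace the localhost placeholder
    from: alertmanager@example.com
    authUsername: alerts
    authPassword: ""                  # supply via a Secret or --set at install time
    requireTLS: true
```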
29-38: Slack configuration is properly templated and safe.
The webhook URL defaults to empty, preventing accidental misconfiguration. Channel, username, and iconEmoji have sensible defaults and are configurable via values. Alert templates correctly use Helm's {{...}} escaping to render Alertmanager template variables (alertname, summary, description).

packages/system/monitoring-agents/templates/alertmanager-workloadmonitor.yaml (1)
1-14: WorkloadMonitor definition is correct.
The selector matches the Alertmanager deployment labels, replicas are sourced from values for consistency, and the minReplicas constraint is reasonable. No issues.

packages/system/monitoring-agents/templates/vmalert.yaml (2)
packages/system/monitoring-agents/templates/vmalert.yaml (2)
7-8: Hardcoded service URLs limit flexibility and assume a specific deployment topology.
Lines 7-8 and 12-15 hardcode the VMAlert datasource and write URLs to the vmselect-shortterm and vminsert-shortterm services. These URLs are not configurable from values and assume a specific cozystack deployment topology. If the monitoring stack architecture changes, these URLs will break silently.
Consider moving these URLs to values.yaml as configurable parameters so deployments can adapt to different topologies.
Verify that these hardcoded service names and paths are correct and stable for cozystack deployments. If topology is expected to vary, these should be configurable.
Also applies to: 12-15
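A sketch of the values-driven variant (the vmalert key structure is an assumption; the URLs shown are simply the current hardcoded defaults):

```yaml
# values.yaml (assumed keys)
vmalert:
  datasource:
    url: http://vmselect-shortterm.tenant-root.svc:8481/select/0/prometheus
  remoteWrite:
    url: http://vminsert-shortterm.tenant-root.svc:8480/insert/0/prometheus/api/v1/write
```

```yaml
# templates/vmalert.yaml (excerpt)
  datasource:
    url: {{ .Values.vmalert.datasource.url }}
  remoteWrite:
    url: {{ .Values.vmalert.remoteWrite.url }}
```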
17-17: Verify that selectAllByDefault: true is intentional.
This setting causes VMAlert to evaluate ALL PrometheusRules in the cluster. While this may be desired, it's very broad and could lead to unexpected rule evaluation or high resource load. Confirm this is the intended behavior, or consider scoping rules by namespace/labels for better control.
Is selecting all PrometheusRules cluster-wide the expected behavior, or should rule evaluation be scoped (e.g., to specific namespaces or with label selectors)?
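If scoping is preferred, one possible shape using the VMAlert CRD's rule selectors (field names are taken from the VictoriaMetrics operator API as commonly documented; treat them as assumptions to verify against the installed CRD version):

```yaml
spec:
  selectAllByDefault: false
  # Only evaluate PrometheusRules from the release namespace...
  ruleNamespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: {{ .Release.Namespace }}
  # ...and only those explicitly labeled for this monitoring stack.
  ruleSelector:
    matchLabels:
      app.kubernetes.io/part-of: monitoring
```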
Commented lines (vmalert.yaml):

```yaml
    url: http://vmselect-shortterm.tenant-root.svc:8481/select/0/prometheus
  remoteWrite:
    url: http://vminsert-shortterm.tenant-root.svc:8480/insert/0/prometheus/api/v1/write
  resources: {}
```
VMAlert has no resource requests or limits.
Line 16 sets resources: {}, which means the VMAlert pod has no guaranteed memory/CPU and no upper limits. This could lead to resource contention or out-of-memory kills. Provide sensible defaults (similar to Alertmanager's 100m/128Mi request) or make them configurable from values.
Apply this diff to provide reasonable resource defaults:

```diff
-  resources: {}
+  resources:
+    requests:
+      cpu: 100m
+      memory: 128Mi
+    limits:
+      cpu: 500m
+      memory: 512Mi
```

Adjust these values based on your evaluation interval and rule complexity, or expose them in values.yaml for flexibility.
🤖 Prompt for AI Agents
In packages/system/monitoring-agents/templates/vmalert.yaml around line 16, the
VMAlert manifest currently has resources: {} which provides no CPU/memory
requests or limits; update the template to set sensible default resource
requests and limits (e.g., requests: cpu 100m, memory 128Mi and limits: cpu
250m, memory 256Mi) and make these values configurable via values.yaml (use Helm
template placeholders for values.alertmanager.resources.* with the defaults), so
operators can override them without changing the chart.
kvaps
left a comment
This will route all alerts bypassing Alerta, which is the IRM, and we already have Slack configuration for it.
All you need is to add a SLACK_SEVERITY_FILTER environment variable for Slack that works in a similar way to the TELEGRAM_DISABLE_NOTIFICATION_SEVERITY environment variable.
Please RTFS
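A sketch of the suggested shape in the Alerta deployment template, mirroring how the Telegram severity variable is wired (the value key names are assumptions here; the PR's later revision settles on a severityFilter/disabledSeverity value):

```yaml
{{- if .Values.alerta.alerts.slack.url }}
- name: SLACK_WEBHOOK_URL
  value: "{{ .Values.alerta.alerts.slack.url }}"
- name: SLACK_SEVERITY_FILTER
  value: "{{ .Values.alerta.alerts.slack.severityFilter }}"
{{- end }}
```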
Force-pushed from 8ca7e80 to 3973af2 (compare)
Force-pushed from 3973af2 to a632194 (compare)
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- packages/extra/monitoring/templates/alerta/alerta.yaml (1 hunks)
- packages/extra/monitoring/values.schema.json (1 hunks)
- packages/extra/monitoring/values.yaml (2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Build
🔇 Additional comments (3)
packages/extra/monitoring/values.schema.json (1)
28-32: Schema definition looks good.
The severityFilter property is properly defined with appropriate type, default, and documentation. It's correctly marked as optional (not in the required array), and the example format is clear.

packages/extra/monitoring/values.yaml (2)
104-106: Documentation is clear and consistent.
The JSDoc typedef properly documents the new severityFilter optional field with a helpful example. This aligns well with the schema definition and provides good guidance to users configuring Slack alerts.
134-136: Default configuration value is sensible.
Setting severityFilter to an empty string provides a good default that allows opt-in filtering without breaking existing configurations. The placement within the slack alert configuration block is correct.
Commented lines (packages/extra/monitoring/templates/alerta/alerta.yaml):

```yaml
{{- if .Values.alerta.alerts.slack.url }}
- name: "SLACK_WEBHOOK_URL"
  value: "{{ .Values.alerta.alerts.slack.url }}"
- name: SLACK_SEVERITY_FILTER
  value: "{{ .Values.alerta.alerts.slack.severityFilter }}"
{{- end }}
```
🧩 Analysis chain
🌐 Web query:
Alerta 9.0.4 Slack plugin SLACK_SEVERITY_FILTER environment variable
💡 Result:
Do you want (A) what SLACK_SEVERITY_FILTER does and allowed values, (B) how to set it for Alerta 9.0.4, or (C) help locating it in the Slack plugin source? I can fetch the exact plugin code and examples once you confirm.
References I can look at: Alerta 9.0 docs (configuration/plugins) and the alerta-contrib plugins repo. [1][2]
[1] https://docs.alerta.io/configuration.html
[2] https://github.com/alerta/alerta-contrib (plugins list)
🌐 Web query:
Alerta Slack alert plugin documentation environment variables
💡 Result:
Short answer — where to find it and how to proceed:
- The Slack notification plugin for Alerta is provided in the alerta-contrib repository (plugins/slack). See the repo. [1]
- Alerta plugin enabling and environment-variable behaviour are documented in the main Alerta docs: enable the plugin by adding "slack" to PLUGINS in your alertad.conf (or PLUGINS env var) and note that environment variables override config file settings. [2][3]
- If you need the exact plugin-specific environment variables (webhook URL, channel, username, etc.), they are defined by the Slack plugin code/README in alerta-contrib — I can fetch and list the exact env var names used by that plugin (or paste the relevant lines from slack.py). Would you like me to pull those exact variables now?
Sources:
- alerta-contrib plugins (slack). [GitHub alerta/alerta-contrib — plugins/slack]. (github.com)
- Alerta configuration / plugins documentation (how to enable plugins). (docs.alerta.io)
- Docker image notes showing PLUGINS env var usage. (github.com)
Citations:
- 1: https://github.com/alerta/alerta-contrib/tree/master/plugins/slack
- 2: https://docs.alerta.io/gettingstarted/tutorial-3-plugins.html?utm_source=openai
- 3: https://github.com/alerta/docker-alerta?utm_source=openai
🏁 Script executed:
# Search for Alerta repository and Slack plugin information
curl -s https://api.github.com/repos/alerta/alerta/contents/alerta/plugins | head -100
Length of output: 5607
🏁 Script executed:
# Fetch the Slack plugin source from alerta-contrib
curl -s https://raw.githubusercontent.com/alerta/alerta-contrib/master/plugins/slack/alerta_slack.py | head -150
Length of output: 5908
🏁 Script executed:
# Also check for README or documentation in the slack plugin directory
curl -s https://raw.githubusercontent.com/alerta/alerta-contrib/master/plugins/slack/README.md
Length of output: 5569
Fix SLACK_SEVERITY_FILTER environment variable format and default handling.
The SLACK_SEVERITY_FILTER environment variable is supported in the Alerta Slack plugin, but there are two issues with the current implementation:
- Format incompatibility: The plugin uses ast.literal_eval() to parse the environment variable, expecting Python literal syntax (e.g., ['critical', 'major']). An empty string default will fail to parse and silently fall back to an empty list [], which disables filtering entirely.
- Missing documentation: The variable is not documented in the Slack plugin README, creating ambiguity about the expected format and behavior.
Update .Values.alerta.alerts.slack.severityFilter to either:
- Use a valid Python list literal format (e.g., ['critical', 'major']) instead of an empty string, or
- Remove the environment variable configuration if severity filtering is not needed
Current template excerpt:

```yaml
- name: SLACK_SEVERITY_FILTER
  value: "{{ .Values.alerta.alerts.slack.severityFilter }}"
```
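One way to satisfy both points, sketched against the array-based disabledSeverity values the PR later adopted (the with-block and toJson rendering are assumptions, not the chart's final code). Since ast.literal_eval also accepts double-quoted JSON lists, rendering the array with toJson produces a value the plugin can parse, and with skips the variable entirely when no filter is set:

```yaml
{{- with .Values.alerta.alerts.slack.disabledSeverity }}
- name: SLACK_SEVERITY_FILTER
  # Renders e.g. '["informational","ok"]', which ast.literal_eval parses into a list.
  value: {{ . | toJson | quote }}
{{- end }}
```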
🤖 Prompt for AI Agents
In packages/extra/monitoring/templates/alerta/alerta.yaml around lines 136 to
141, the SLACK_SEVERITY_FILTER env var is being set to whatever
.Values.alerta.alerts.slack.severityFilter provides, but the Alerta Slack plugin
expects a Python literal list (e.g. ['critical','major']) and will fail or
silently fallback if given an empty string; update the template to either (a)
only emit the SLACK_SEVERITY_FILTER env var when
.Values.alerta.alerts.slack.severityFilter is a non-empty, properly formatted
Python list literal, or (b) remove the env var output when no severity filter is
desired, and update the chart README to document the expected Python list
literal format and that omitting the variable disables filtering.
…onitoring

What this PR does
This PR introduces the SLACK_SEVERITY_FILTER environment variable in the Alerta deployment to enable filtering of alert severities for Slack notifications based on the disabledSeverity configuration. Additionally, it adds a VMAgent resource template for scraping metrics within tenant namespaces, improving monitoring granularity and control.

```release-note
[monitoring] Add SLACK_SEVERITY_FILTER for filtering Slack alert severities and VMAgent configuration for tenant-specific metrics scraping.
```

Signed-off-by: IvanHunters <xorokhotnikov@gmail.com>
Force-pushed from a632194 to df3a409 (compare)
kvaps
left a comment
LGTM
Backport failed for release-0.38. Please cherry-pick the changes locally and resolve any conflicts.

```shell
git fetch origin release-0.38
git worktree add -d .worktree/backport-1712-to-release-0.38 origin/release-0.38
cd .worktree/backport-1712-to-release-0.38
git switch --create backport-1712-to-release-0.38
git cherry-pick -x df3a4091420e9ee056beedea7e9f2892dee0e054
```
…onitoring (#1712)

[monitoring] Add SLACK_SEVERITY_FILTER field and VMAgent for tenant monitoring

What this PR does
This PR introduces the SLACK_SEVERITY_FILTER environment variable in the Alerta deployment to enable filtering of alert severities for Slack notifications based on the disabledSeverity configuration. Additionally, it adds a VMAgent resource template for scraping metrics within tenant namespaces, improving monitoring granularity and control.

```release-note
[monitoring] Add SLACK_SEVERITY_FILTER for filtering Slack alert severities and VMAgent configuration for tenant-specific metrics scraping.
```

Summary by CodeRabbit
* **New Features**
  * Added configurable severity filtering for Telegram alerts.
  * Extended Slack severity filtering to accept lists of severities.
* **Bug Fixes / Behavior**
  * Severity settings now accept arrays (multiple severities) instead of single comma-separated strings.
* **Documentation**
  * Updated configuration docs and examples to show list-style severity settings.
[monitoring] Add SLACK_SEVERITY_FILTER field and VMAgent for tenant monitoring
What this PR does
This PR introduces the SLACK_SEVERITY_FILTER environment variable in the Alerta deployment to enable
filtering of alert severities for Slack notifications based on the disabledSeverity configuration.
Additionally, it adds a VMAgent resource template for scraping metrics within tenant namespaces, improving
monitoring granularity and control.