mgr/cephadm: spec.virtual_ip param should be used by the ingress daemon#44374

Merged
sebastian-philipp merged 1 commit into ceph:master from fmount:ingress_vip on Jan 5, 2022

Conversation


@fmount fmount commented Dec 21, 2021

When the ingress spec is built, a virtual_ip parameter is provided, and the haproxy instance is expected to use the defined value.
The defined VIP is properly configured by the keepalived instance and, as per the documentation [1],
the ingress service should be able to use this value as the entry point for the haproxy frontend.

[1] https://docs.ceph.com/en/latest/cephadm/services/rgw/#high-availability-service-for-rgw

Fixes: https://tracker.ceph.com/issues/53684
Signed-off-by: Francesco Pantano fpantano@redhat.com
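For context, an ingress spec of the kind this patch exercises looks roughly like the following (the service id and addresses here are illustrative, not taken from this PR; see the linked RGW HA documentation for the authoritative format):

```yaml
service_type: ingress
service_id: rgw.myrgw            # hypothetical service id
placement:
  count: 2
spec:
  backend_service: rgw.myrgw
  virtual_ip: 192.168.122.100/24 # the VIP haproxy should bind, per this fix
  frontend_port: 8080
  monitor_port: 1967
```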

Checklist

  • Tracker (select at least one)
    • References tracker ticket
    • Updates documentation if necessary
    • Includes tests for new functionality or reproducer for bug
  • Component impact

@fmount fmount requested a review from a team as a code owner December 21, 2021 08:52

fmount commented Dec 21, 2021

/cc @sebastian-philipp

@sebastian-philipp commented:

It would be great if you could look into writing a unit test similar to:

```python
class TestMonitoring:
    @patch("cephadm.serve.CephadmServe._run_cephadm")
    def test_alertmanager_config(self, _run_cephadm, cephadm_module: CephadmOrchestrator):
        _run_cephadm.side_effect = async_side_effect(('{}', '', 0))
        with with_host(cephadm_module, 'test'):
            with with_service(cephadm_module, AlertManagerSpec()):
                y = dedent("""
                # This file is generated by cephadm.
                # See https://prometheus.io/docs/alerting/configuration/ for documentation.
                global:
                  resolve_timeout: 5m
                route:
                  receiver: 'default'
                  routes:
                  - group_by: ['alertname']
                    group_wait: 10s
                    group_interval: 10s
                    repeat_interval: 1h
                    receiver: 'ceph-dashboard'
                receivers:
                - name: 'default'
                  webhook_configs:
                - name: 'ceph-dashboard'
                  webhook_configs:
                  - url: 'http://[::1]:8080/api/prometheus_receiver'
                """).lstrip()
                _run_cephadm.assert_called_with(
                    'test',
                    'alertmanager.test',
                    'deploy',
                    [
                        '--name', 'alertmanager.test',
                        '--meta-json', '{"service_name": "alertmanager", "ports": [9093, 9094], "ip": null, "deployed_by": [], "rank": null, "rank_generation": null}',
                        '--config-json', '-', '--tcp-ports', '9093 9094'
                    ],
                    stdin=json.dumps({"files": {"alertmanager.yml": y}, "peers": []}),
                    image='')
```

to verify this change. Let's avoid future regressions here.
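For readers unfamiliar with the pattern, the quoted test uses the standard `unittest.mock` patch-and-assert approach. A self-contained sketch of that technique (the `deploy` and `run_cephadm` names below are invented for illustration, not cephadm APIs):

```python
from unittest.mock import patch

def run_cephadm(*args):
    # Stand-in for the expensive call a unit test should never really make.
    raise RuntimeError('should be mocked in tests')

def deploy(name, ports):
    # Code under test: builds the CLI-style argument list and delegates.
    run_cephadm(name, 'deploy', ['--tcp-ports', ' '.join(map(str, ports))])

# Patch the collaborator, run the code under test, then assert on the exact
# call it received -- the same shape as _run_cephadm.assert_called_with above.
with patch(__name__ + '.run_cephadm') as mock_run:
    deploy('alertmanager.test', [9093, 9094])
    mock_run.assert_called_with('alertmanager.test', 'deploy',
                                ['--tcp-ports', '9093 9094'])
```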


fmount commented Dec 22, 2021

> It would be great if you could look into writing a unit test similar to [the `TestMonitoring.test_alertmanager_config` example quoted above] to verify this change. Let's avoid future regressions here.

ack, I'm going to draft a unit test by EOD

@fmount fmount force-pushed the ingress_vip branch 2 times, most recently from b73a58e to 950bdaa Compare December 22, 2021 13:58
mgr/cephadm: spec.virtual_ip param should be used by the ingress daemon

When the ingress spec is built, a virtual_ip parameter is provided in the spec, and the haproxy
instance is expected to use the defined value.
The defined VIP is properly configured by the keepalived instance and, as per the documentation [1],
the ingress service should be able to use this value as the entry point for the haproxy frontend.
This patch also introduces a basic unit test for the IngressService, with the purpose of
validating the config files generated for both haproxy and keepalived.

[1] https://docs.ceph.com/en/latest/cephadm/services/rgw/#high-availability-service-for-rgw

Fixes: https://tracker.ceph.com/issues/53684
Signed-off-by: Francesco Pantano <fpantano@redhat.com>
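The behavior this patch targets can be sketched in isolation (illustrative only; `render_frontend` is a hypothetical helper, not cephadm code): when a `virtual_ip` is present in the spec, the generated haproxy frontend should bind to it rather than to a wildcard address.

```python
# Hypothetical sketch of the fix's intent; not the actual cephadm code.
def render_frontend(virtual_ip=None, frontend_port=8080):
    """Render a minimal haproxy frontend stanza.

    A virtual_ip such as '192.168.122.100/24' is stripped of its CIDR
    suffix and used as the bind address; without a VIP we fall back to
    binding on all interfaces.
    """
    addr = virtual_ip.split('/')[0] if virtual_ip else '*'
    return 'frontend frontend\n    bind {}:{}\n'.format(addr, frontend_port)
```

With a VIP configured, the stanza contains `bind 192.168.122.100:8080`; without one, it falls back to `bind *:8080`.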

@sebastian-philipp sebastian-philipp merged commit 4b5b32b into ceph:master Jan 5, 2022

fmount commented Jan 13, 2022

@sebastian-philipp @adk3798 should we backport this PR to pacific, as it solves an issue that can be hit in 5.1?
