
qa/suites/rados: do not test with el7 #35719

Merged
tchaikov merged 2 commits into ceph:master from tchaikov:wip-cephadm-sans-el7 on Jun 24, 2020

Conversation

@tchaikov
Contributor

since we stopped building master on el7, there is no need to test
cephadm with el7 anymore.

Signed-off-by: Kefu Chai <kchai@redhat.com>

Checklist

  • References tracker ticket
  • Updates documentation if necessary
  • Includes tests for new functionality or reproducer for bug


@tchaikov tchaikov force-pushed the wip-cephadm-sans-el7 branch from b8f4980 to 2ee1ba2 Compare June 23, 2020 07:42
@tchaikov
Contributor Author

this change depends on ceph/teuthology#1520

in this test, older ceph clients are installed on el7, but the ceph
cluster is deployed using cephadm, which in turn pulls ceph container
images built from the ceph under test on el8.

since we've dropped the master builds on el7, there is no need to
verify that the ceph package is available when cephadm is used to
deploy the cluster.
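The deployment-aware check described above could be sketched as follows. This is a minimal illustration with hypothetical names (`needs_package_check`, `deploy_with`) — the actual change lives in ceph/teuthology#1520 and its real API is not shown here:

```python
# Hypothetical sketch of the idea behind ceph/teuthology#1520: skip the
# "is the ceph package built for this distro?" check when cephadm
# deploys the cluster, because cephadm pulls container images instead
# of installing the bare ceph package on the host.
# All names here are illustrative, not teuthology's actual API.

def needs_package_check(job_config: dict) -> bool:
    """Return True only when the bare ceph package must exist.

    With cephadm, ceph runs from el8-based container images, so an
    el7 client host does not need a matching ceph package build.
    """
    return job_config.get("deploy_with", "packages") != "cephadm"


# cephadm deployment: no package-availability check required
print(needs_package_check({"deploy_with": "cephadm"}))   # False
# traditional package-based deployment: check still required
print(needs_package_check({"deploy_with": "packages"}))  # True
```

The point of the guard is that the el7 hosts in thrash-old-clients only run old *client* packages; the cluster itself comes from el8 container images, so the el7 package repo no longer needs to exist.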

Signed-off-by: Kefu Chai <kchai@redhat.com>
@tchaikov tchaikov force-pushed the wip-cephadm-sans-el7 branch from 2ee1ba2 to 743edd0 Compare June 23, 2020 09:18
@tchaikov tchaikov merged commit 68e1db0 into ceph:master Jun 24, 2020
@tchaikov tchaikov deleted the wip-cephadm-sans-el7 branch June 24, 2020 01:39
@sebastian-philipp
Contributor

sebastian-philipp commented Jul 20, 2020

While looking into https://tracker.ceph.com/issues/46529, it turned out that this PR conflicts with #32377.

Right now, this conflict makes suites/rados/thrash-old-clients the only suite that tests cephadm on CentOS 7. And it turns out that we likely have a problem with podman on CentOS 7.6. Thus I see two options:

  1. We revert this PR and continue to test cephadm on CentOS 7. Then, we'd need someone with in-depth podman experience to debug the issue we see with CentOS 7.6.
  2. Alternatively, we revert #32377 (qa/suites/rados/thrash-old-clients: use cephadm) and test thrash-old-clients using the traditional package-based deployment.

@tchaikov @jdurgin @liewegas. This depends a bit on whether you see the need to support cephadm on CentOS 7.

Might be related to containers/podman#2553 (comment)

@tchaikov
Contributor Author

tchaikov commented Jul 20, 2020

i think this change originated from a cleanup to drop the el7 bits from ceph.spec.in and install-deps.sh, after i learned from @jdurgin that we can stop building ceph on el7. that led to a series of changes:

  • stop building master on el7
  • stop testing master on el7

i am not sure what "conflict" stands for here. do you mean we are still exercising a cephadm+el7 combination that is not supposed to be covered by our testing anymore?

@sebastian-philipp
Contributor

to recap, the conflict is: #32377 still tests cephadm on CentOS 7 in suites/rados/thrash-old-clients, even though this PR removed the CentOS 7 testing from rados/cephadm.

Things should still be fine, but unfortunately they aren't: I'm seeing test failures in suites/rados/thrash-old-clients which I cannot reproduce in suites/rados/cephadm. They might be caused by the older kernel of CentOS 7.

If we don't need cephadm on CentOS 7, then we should revert #32377 to use the traditional deployment for suites/rados/thrash-old-clients.

@sebastian-philipp
Contributor

#36321 is the revert of #32377
