qa/workunits/cephtool: check_response didn't find erasure-code string #46869

Merged
vshankar merged 1 commit into ceph:main from nmshelke:fix-56384
Jun 30, 2022
Conversation

@nmshelke
Contributor

@nmshelke nmshelke commented Jun 28, 2022

  1. If a data or metadata pool is already in use by a
     filesystem, it is not allowed to be reused for another
     filesystem.

  2. The test was failing because the check in (1) runs before
     the erasure-code pool check, so the expected error string
     was not found in the output.

  3. The proposed fix checks for the newly added error string
     instead of 'erasure-code'.

  4. Also add new tests that verify the string 'erasure-code'
     by passing --force, so that the pool-reuse check (1) is
     skipped and the 'erasure-code' check is hit.

Fixes: https://tracker.ceph.com/issues/56384
Signed-off-by: Nikhilkumar Shelke <nshelke@redhat.com>
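The fix hinges on the check_response helper from qa/workunits/cephtool/test.sh, which greps command output for an expected error string. The sketch below mirrors that helper in a self-contained form; the echoed error messages are simulated placeholders standing in for real `ceph fs new` output, not verbatim Ceph error strings, so it runs without a cluster.

```shell
#!/usr/bin/env bash
# Simplified sketch of the check_response pattern used by
# qa/workunits/cephtool/test.sh. The 'ceph' output below is
# simulated so the sketch is runnable without a cluster.
set -e

TMPFILE=$(mktemp)
trap 'rm -f "$TMPFILE"' EXIT

# Fail (return 1) unless the expected string appears in $TMPFILE.
check_response() {
    expected_string=$1
    if ! grep -q -- "$expected_string" "$TMPFILE"; then
        echo "check_response: didn't find '$expected_string' in:" >&2
        cat "$TMPFILE" >&2
        return 1
    fi
}

# Without --force, the pool-reuse check fires first, so the test
# must look for that error string rather than 'erasure-code'.
echo "Error EINVAL: pool is already in use by a filesystem (simulated)" > "$TMPFILE"
check_response "already in use"

# With --force, the pool-reuse check is skipped and the
# erasure-code check is reached.
echo "Error EINVAL: erasure-code pool not allowed here (simulated)" > "$TMPFILE"
check_response "erasure-code"

echo OK
```

In the real test, $TMPFILE holds the stderr of the `ceph fs new` invocation; the point of the fix is choosing the expected string to match whichever validation check actually runs first.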

Contribution Guidelines

Checklist

  • Tracker (select at least one)
    • References tracker ticket
    • Very recent bug; references commit where it was introduced
    • New feature (ticket optional)
    • Doc update (no ticket needed)
    • Code cleanup (no ticket needed)
  • Component impact
    • Affects Dashboard, opened tracker ticket
    • Affects Orchestrator, opened tracker ticket
    • No impact that needs to be tracked
  • Documentation (select at least one)
    • Updates relevant documentation
    • No doc update is appropriate
  • Tests (select at least one)
Show available Jenkins commands
  • jenkins retest this please
  • jenkins test classic perf
  • jenkins test crimson perf
  • jenkins test signed
  • jenkins test make check
  • jenkins test make check arm64
  • jenkins test submodules
  • jenkins test dashboard
  • jenkins test dashboard cephadm
  • jenkins test api
  • jenkins test docs
  • jenkins render docs
  • jenkins test ceph-volume all
  • jenkins test ceph-volume tox
  • jenkins test windows

@github-actions github-actions bot added the tests label Jun 28, 2022
@nmshelke nmshelke added the cephfs label Jun 28, 2022
@nmshelke nmshelke requested review from a team, ljflores and vshankar June 28, 2022 11:27
Member

@ljflores ljflores left a comment


Cosmetic fix: Please make sure your commit has a title that matches the subdirectory you made your changes in; for you, this would be qa/workunits/cephtool.

@nmshelke nmshelke force-pushed the fix-56384 branch 2 times, most recently from e2960f0 to 9d982ed on June 28, 2022 13:59
@nmshelke nmshelke changed the title ceph/test.sh: check_response didn't find erasure-code in output qa/workunits/cephtool: check_response didn't find erasure-code string Jun 28, 2022
@nmshelke
Contributor Author

Cosmetic fix: Please make sure your commit has a title that matches the subdirectory you made your changes in; for you, this would be qa/workunits/cephtool.

Updated commit message and title.

@nmshelke nmshelke requested a review from ljflores June 28, 2022 14:01
@ljflores
Member

Scheduled some tests for this change here:

http://pulpito.front.sepia.ceph.com/lflores-2022-06-28_17:26:53-rados:singleton-bluestore-wip-yuri6-testing-2022-06-22-1419-distro-default-smithi/

./teuthology/virtualenv/bin/teuthology-suite -v -m smithi -c wip-yuri6-testing-2022-06-22-1419 -s rados:singleton-bluestore --suite-repo https://github.com/nmshelke/ceph --suite-branch fix-56384 --filter-out "rhel" -p 80

@nmshelke nmshelke force-pushed the fix-56384 branch 3 times, most recently from 31df7c0 to 2f55776 on June 29, 2022 05:11
@vshankar
Contributor

Scheduled some tests for this change here:

http://pulpito.front.sepia.ceph.com/lflores-2022-06-28_17:26:53-rados:singleton-bluestore-wip-yuri6-testing-2022-06-22-1419-distro-default-smithi/

./teuthology/virtualenv/bin/teuthology-suite -v -m smithi -c wip-yuri6-testing-2022-06-22-1419 -s rados:singleton-bluestore --suite-repo https://github.com/nmshelke/ceph --suite-branch fix-56384 --filter-out "rhel" -p 80

@nmshelke I see the tests all failed. Mind taking a look?

1. If a data or metadata pool is already in use by a
filesystem, it is not allowed to be reused for another
filesystem.

2. The test was failing because the check in (1) runs before
the erasure-code pool check, so the expected error string
was not found in the output.

3. The proposed fix checks for the newly added error string
instead of 'erasure-code'.

4. Also add new tests that verify the string 'erasure-code'
by passing --force, so that the pool-reuse check (1) is
skipped and the 'erasure-code' check is hit.

Fixes: https://tracker.ceph.com/issues/56384
Signed-off-by: Nikhilkumar Shelke <nshelke@redhat.com>
@nmshelke
Contributor Author

Scheduled some tests for this change here:
http://pulpito.front.sepia.ceph.com/lflores-2022-06-28_17:26:53-rados:singleton-bluestore-wip-yuri6-testing-2022-06-22-1419-distro-default-smithi/
./teuthology/virtualenv/bin/teuthology-suite -v -m smithi -c wip-yuri6-testing-2022-06-22-1419 -s rados:singleton-bluestore --suite-repo https://github.com/nmshelke/ceph --suite-branch fix-56384 --filter-out "rhel" -p 80

@nmshelke I see the tests all failed. Mind taking a look?

@ljflores @vshankar I have updated the fix.
Please find teuthology results at: http://pulpito.front.sepia.ceph.com/nshelke-2022-06-29_07:00:00-rados:singleton-bluestore-wip-yuri6-testing-2022-06-22-1419-distro-default-smithi/

Member

@ljflores ljflores left a comment


The changes look good to me, and the teuthology tests are all green now.

@ljflores ljflores added the core label Jun 29, 2022
@ljflores
Member

@vshankar I'll leave it to you to merge in case this needs any further CephFS testing. All good on the RADOS end.

@vshankar
Contributor

@vshankar I'll leave it to you to merge in case this needs any further CephFS testing. All good on the RADOS end.

Looks good!

@vshankar vshankar merged commit 06c35ed into ceph:main Jun 30, 2022