qa: enable MDS export killpoint tests #28004
Conversation
(force-pushed from 6893b36 to bcc8b22)
This pull request has been automatically marked as stale because it has not had any activity for 60 days. It will be closed if no further activity occurs for another 30 days.
ping
@batrick Sorry for the delay. I was caught up in the export ephemeral pin work. Will update next week.
ping :)
(force-pushed from 0f91783 to db57b4b)
(force-pushed from a0045aa to bd66a09)
(force-pushed from 2ff25a8 to 2d0d381)
(force-pushed from 2d0d381 to ef0229a)
(force-pushed from ef0229a to c3e055a)
(force-pushed from 8a62f5e to 4ee87ab)
(force-pushed from 980888f to 890feb1)
Please rebase and run through teuthology (…)
(force-pushed from 890feb1 to 422e3f5)
batrick
left a comment
flake8 run-test: commands[0] | flake8 --select=F,E9 --exclude=venv,.tox
./tasks/cephfs/filesystem.py:858:72: F821 undefined name 'cmp'
./tasks/cephfs/test_exports.py:526:9: F841 local variable 'all_daemons' is assigned to but never used
./tasks/cephfs/test_exports.py:551:56: F821 undefined name 'org_files'
./tasks/cephfs/test_exports.py:552:56: F821 undefined name 'out'
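The F821 on `cmp` flags a Python 2 builtin that Python 3 removed. A common drop-in replacement (a sketch of the usual fix, not necessarily the exact patch applied to `filesystem.py`) is:

```python
def cmp(a, b):
    """Python 3 replacement for the removed builtin cmp():
    returns -1, 0, or 1 as a is less than, equal to, or greater than b."""
    return (a > b) - (a < b)
```

The F841 and remaining F821 errors are the usual companions of a refactor: a leftover local (`all_daemons`) and names (`org_files`, `out`) referenced after the code that defined them was moved or renamed.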
Signed-off-by: Sidharth Anupkrishnan <sanupkri@redhat.com>
(force-pushed from 422e3f5 to 0c751c2)
https://pulpito.ceph.com/sidharthanup-2020-07-09_12:17:09-multimds-octopus-distro-basic-smithi/ - it seems to be failing for (import, export) killpoints (7, 10) and (9, 13). It's stuck during verify_data(), waiting on ls here: https://github.com/ceph/ceph/pull/28004/files#diff-d5f17ebd745250b57be2b89d4ba48efbR545 when called here: https://github.com/ceph/ceph/pull/28004/files#diff-d5f17ebd745250b57be2b89d4ba48efbR609. It's passing for most other pairs of killpoints. I've scheduled a run just for (7, 10): https://pulpito.ceph.com/sidharthanup-2020-07-09_20:05:33-multimds-octopus-distro-basic-smithi/. Let me confirm whether it's the same behaviour.
@batrick Import killpoint = 7 will cause a test failure. The reason is that before this killpoint (https://github.com/sidharthanup/ceph/blob/wip-multimdss-killpoint-test/src/mds/Migrator.cc#L3026) is hit, prepare_force_open_sessions() is called (https://github.com/sidharthanup/ceph/blob/wip-multimdss-killpoint-test/src/mds/Migrator.cc#L2699) in handle_export_dir(), and this call marks a dirty open session which later gets persisted as (…)
jenkins test make check |
Yes, this is a genuine bug. Open a tracker ticket. It's great that your tests found a new bug!
Ack. Yeah, last week I was wondering whether it was something wrong with my tests or not. It's nice to know it caught undesirable behavior!
@sidharthanup have you looked into this failure too?
jenkins test dashboard backend
@batrick It's the same issue with killpoint 9. The client hasn't started the connection with the MDS yet, so when the MDS goes down, the client gets blacklisted on replay on the new MDS. It should be fixed by the patch I'm working on.
Blocked on #36227
Superseded by #41969
Export Path Killpoint test for multimds recovery
Fixes: http://tracker.ceph.com/issues/17835
Signed-off-by: Sidharth Anupkrishnan <sanupkri@redhat.com>
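For context, the export/import killpoints these tests exercise are driven by the MDS debug settings `mds_kill_export_at` and `mds_kill_import_at` (existing options checked in `Migrator.cc`). A sketch of how a pair such as (import 7, export 10) could be selected on a live cluster; the exact teuthology plumbing in this PR may differ:

```shell
# Sketch (config fragment): arrange for the exporting MDS to die at
# export killpoint 10 and the importing MDS at import killpoint 7.
ceph config set mds mds_kill_export_at 10
ceph config set mds mds_kill_import_at 7
# Then trigger a subtree migration, e.g. by pinning a directory to another rank:
setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/testdir
```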