
[DNM] skip tasks.cephfs.test_nfs.TestNFS.test_create_multiple_exports#35644

Closed
sebastian-philipp wants to merge 1 commit intoceph:masterfrom
sebastian-philipp:disable-test_create_multiple_exports

Conversation

@sebastian-philipp
Contributor

Checklist

  • References tracker ticket
  • Updates documentation if necessary
  • Includes tests for new functionality or reproducer for bug

Available Jenkins commands:
  • jenkins retest this please
  • jenkins test classic perf
  • jenkins test crimson perf
  • jenkins test signed
  • jenkins test make check
  • jenkins test make check arm64
  • jenkins test submodules
  • jenkins test dashboard
  • jenkins test dashboard backend
  • jenkins test docs
  • jenkins render docs
  • jenkins test ceph-volume all
  • jenkins test ceph-volume tox

@varshar16
Contributor

Skipping this test won't work; it produces a new error:

2020-06-17T18:31:42.676 INFO:tasks.cephfs_test_runner:======================================================================
2020-06-17T18:31:42.676 INFO:tasks.cephfs_test_runner:FAIL: test_export_create_and_delete (tasks.cephfs.test_nfs.TestNFS)
2020-06-17T18:31:42.677 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2020-06-17T18:31:42.677 INFO:tasks.cephfs_test_runner:Traceback (most recent call last):
2020-06-17T18:31:42.677 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_varshar16_ceph_wip-varsha-testing-nfs/qa/tasks/cephfs/test_nfs.py", line 106, in test_export_create_and_delete
2020-06-17T18:31:42.677 INFO:tasks.cephfs_test_runner:    self._test_delete_cluster()
2020-06-17T18:31:42.678 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_varshar16_ceph_wip-varsha-testing-nfs/qa/tasks/cephfs/test_nfs.py", line 69, in _test_delete_cluster
2020-06-17T18:31:42.678 INFO:tasks.cephfs_test_runner:    self.assertEqual("No services reported\n", orch_output)
2020-06-17T18:31:42.678 INFO:tasks.cephfs_test_runner:AssertionError: 'No services reported\n' != 'NAME              RUNNING  REFRESHED  AGE [234 chars]  \n'
2020-06-17T18:31:42.679 INFO:tasks.cephfs_test_runner:- No services reported
2020-06-17T18:31:42.679 INFO:tasks.cephfs_test_runner:+ NAME              RUNNING  REFRESHED  AGE  PLACEMENT    IMAGE NAME                                                          IMAGE ID      
2020-06-17T18:31:42.679 INFO:tasks.cephfs_test_runner:+ nfs.ganesha-test      1/0  4s ago     -    <unmanaged>  quay.ceph.io/ceph-ci/ceph:289329252c2ae943de01ac9f80a7a12c6964057e  956bacc1189b  
2020-06-17T18:31:42.679 INFO:tasks.cephfs_test_runner:
2020-06-17T18:31:42.680 INFO:tasks.cephfs_test_runner:
2020-06-17T18:31:42.680 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2020-06-17T18:31:42.680 INFO:tasks.cephfs_test_runner:Ran 2 tests in 57.262s
2020-06-17T18:31:42.680 INFO:tasks.cephfs_test_runner:
2020-06-17T18:31:42.681 INFO:tasks.cephfs_test_runner:FAILED (failures=1)
2020-06-17T18:31:42.681 INFO:tasks.cephfs_test_runner:

http://qa-proxy.ceph.com/teuthology/varsha-2020-06-17_18:09:32-rados-wip-varsha-testing-nfs-distro-basic-smithi/

@sebastian-philipp
Contributor Author

meh

@varshar16
Contributor

Is there some issue with quay? I am getting errors when pulling images.
https://gist.github.com/varshar16/b793b0bc9422e21338bd6dcd46cc2a5d

@sebastian-philipp force-pushed the disable-test_create_multiple_exports branch from 8fdd3e7 to 2693b48 on June 18, 2020 10:15
@sebastian-philipp
Copy link
Contributor Author

getting this as well:

➜  ceph git:(disable-test_create_multiple_exports) ✗ podman pull quay.ceph.io/ceph-ci/ceph:289329252c2ae943de01ac9f80a7a12c6964057e
Trying to pull quay.ceph.io/ceph-ci/ceph:289329252c2ae943de01ac9f80a7a12c6964057e...
  Invalid status code returned when fetching blob 502 (Bad Gateway)
Error: error pulling image "quay.ceph.io/ceph-ci/ceph:289329252c2ae943de01ac9f80a7a12c6964057e": unable to pull quay.ceph.io/ceph-ci/ceph:289329252c2ae943de01ac9f80a7a12c6964057e: unable to pull image: Error parsing image configuration: Invalid status code returned when fetching blob 502 (Bad Gateway)

@batrick
Member

I assume you're using this for testing but NAK on this going into master.

I think Varsha uncovered a cephadm bug. We tell cephadm to remove nfs-ganesha:

2020-06-17T18:31:33.855+0000 7fe3506a8700  0 [volumes DEBUG root] _oremote volumes -> cephadm.remove_service(*('nfs.ganesha-test',), **{})
2020-06-17T18:31:33.855+0000 7fe3506a8700 20 mgr Gil Switched to new thread state 0x55d35d5e3b00
2020-06-17T18:31:33.855+0000 7fe3506a8700 20 mgr ~Gil Destroying new thread state 0x55d35d5e3b00
2020-06-17T18:31:33.855+0000 7fe3506a8700 20 mgr Gil Switched to new thread state 0x55d35d5e3b00
2020-06-17T18:31:33.855+0000 7fe3506a8700 20 mgr dispatch_remote Calling cephadm.remove_service...
2020-06-17T18:31:33.855+0000 7fe3506a8700 20 mgr dispatch_remote Success calling 'remove_service'
2020-06-17T18:31:33.855+0000 7fe3506a8700 20 mgr ~Gil Destroying new thread state 0x55d35d5e3b00
2020-06-17T18:31:33.855+0000 7fe3506a8700 20 mgr Gil Switched to new thread state 0x55d35d5e3b00
2020-06-17T18:31:33.855+0000 7fe3506a8700 20 mgr ~Gil Destroying new thread state 0x55d35d5e3b00
2020-06-17T18:31:33.855+0000 7fe3506a8700 20 mgr Gil Switched to new thread state 0x55d35d5e3b00
2020-06-17T18:31:33.855+0000 7fe3506a8700 20 mgr dispatch_remote Calling orchestrator._select_orchestrator...
2020-06-17T18:31:33.855+0000 7fe3506a8700 20 mgr get_config  key: mgr/orchestrator/orchestrator
2020-06-17T18:31:33.855+0000 7fe3506a8700 10 mgr get_typed_config get_typed_config orchestrator found: cephadm
2020-06-17T18:31:33.855+0000 7fe3506a8700 20 mgr dispatch_remote Success calling '_select_orchestrator'
2020-06-17T18:31:33.855+0000 7fe3506a8700 20 mgr ~Gil Destroying new thread state 0x55d35d5e3b00
2020-06-17T18:31:33.855+0000 7fe3506a8700  0 [volumes DEBUG root] _oremote volumes -> cephadm.process(*([<class 'cephadm.module.CephadmCompletion'>(_s=1, val=NA, _on_c=<function trivial_completion.<locals>.wrapper.<locals>.<lambda> at 0x7fe341f96048>, id=140614041503952, name=<lambda>, pr=NA, _next=None)],), **{})
2020-06-17T18:31:33.855+0000 7fe3506a8700 20 mgr Gil Switched to new thread state 0x55d35d5e3b00
2020-06-17T18:31:33.855+0000 7fe3506a8700 20 mgr ~Gil Destroying new thread state 0x55d35d5e3b00
2020-06-17T18:31:33.855+0000 7fe3506a8700 20 mgr Gil Switched to new thread state 0x55d35d5e3b00
2020-06-17T18:31:33.855+0000 7fe3506a8700 20 mgr dispatch_remote Calling cephadm.process...
2020-06-17T18:31:33.859+0000 7fe3506a8700  0 [cephadm DEBUG root] process: completions=<CephadmCompletion>[
       <lambda>(...),
]
2020-06-17T18:31:33.859+0000 7fe3506a8700  0 [cephadm INFO root] Remove service nfs.ganesha-test
2020-06-17T18:31:33.859+0000 7fe3506a8700  0 log_channel(cephadm) log [INF] : Remove service nfs.ganesha-test
2020-06-17T18:31:33.859+0000 7fe3506a8700  0 [cephadm DEBUG root] Looking for OSDSpec with service_name: nfs.ganesha-test

and then

2020-06-17T18:31:37.435+0000 7fe3430a9700  0 [cephadm INFO root] Removing orphan daemon nfs.ganesha-test.smithi095...
2020-06-17T18:31:37.435+0000 7fe3430a9700  0 log_channel(cephadm) log [INF] : Removing orphan daemon nfs.ganesha-test.smithi095...
2020-06-17T18:31:37.435+0000 7fe3430a9700  0 [cephadm INFO root] Removing daemon nfs.ganesha-test.smithi095 from smithi095
2020-06-17T18:31:37.435+0000 7fe3430a9700  0 log_channel(cephadm) log [INF] : Removing daemon nfs.ganesha-test.smithi095 from smithi095

and then

2020-06-17T18:31:42.159+0000 7fe350ea9700  1 -- [v2:172.21.15.95:6800/1688986166,v1:172.21.15.95:6801/1688986166] <== client.14313 172.21.15.95:0/3980712865 1 ==== mgr_command(tid 0: {"prefix": "orch ls", "service_type": "nfs", "target": ["mon-mgr", ""]}) v1 ==== 95+0+0 (secure 0 0 0) 0x55d35d4811e0 con 0x55d35d296400

The orch ls command returns the nfs.ganesha-test service even though it's been deleted.

From: /ceph/teuthology-archive/varsha-2020-06-17_18:09:32-rados-wip-varsha-testing-nfs-distro-basic-smithi/5158445/remote/ubuntu@smithi095.front.sepia.ceph.com/log/3142d272-b0c8-11ea-a06d-001a4aab830c/ceph-mgr.a.log.gz

@varshar16 To get around this, I think you should change the test to poll for removal. Maybe that will help.
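
The polling suggested above could be sketched roughly like this (a minimal, hypothetical helper — `wait_for_service_removal` and the `list_services` callback are illustrative names, not part of the actual qa test code; the real test would wrap `ceph orch ls` output instead):

```python
import time

def wait_for_service_removal(list_services, service_name, timeout=60, interval=2):
    """Poll until service_name no longer appears in the service listing,
    or raise TimeoutError once the deadline passes."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        # list_services() is assumed to return the current service names,
        # e.g. parsed from `ceph orch ls` output.
        if service_name not in list_services():
            return
        time.sleep(interval)
    raise TimeoutError(
        "service %r still present after %ss" % (service_name, timeout))
```

In the test this would replace the fixed 8-second sleep: the loop keeps checking until the daemon's graceful shutdown completes, however long that takes, up to the timeout.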

@mgfritch
Contributor

The orch ls command returns the nfs.ganesha-test service even though it's been deleted.

From: /ceph/teuthology-archive/varsha-2020-06-17_18:09:32-rados-wip-varsha-testing-nfs-distro-basic-smithi/5158445/remote/ubuntu@smithi095.front.sepia.ceph.com/log/3142d272-b0c8-11ea-a06d-001a4aab830c/ceph-mgr.a.log.gz

The service is unmanaged with a count 1/0, which means that the ganesha daemon is still in the process of doing a graceful shutdown -

2020-06-17T18:31:42.162 INFO:teuthology.orchestra.run.smithi095.stdout:NAME              RUNNING  REFRESHED  AGE  PLACEMENT    IMAGE NAME                                                          IMAGE ID
2020-06-17T18:31:42.162 INFO:teuthology.orchestra.run.smithi095.stdout:nfs.ganesha-test      1/0  4s ago     -    <unmanaged>  quay.ceph.io/ceph-ci/ceph:289329252c2ae943de01ac9f80a7a12c6964057e  956bacc1189b

When the container has finally stopped, the service will no longer appear during an orch ls.

@varshar16 To get around this, I think you should change the test to poll for removal. Maybe that will help.

+1 for polling orch ls until the service has been removed. In my testing it can take as long as 30 seconds for the nfs container to completely stop.

@sebastian-philipp
Contributor Author

I assume you're using this for testing but NAK on this going into master.

👍 That's why I marked it as DNM and omitted the DCO to prevent an accidental merge.

@varshar16
Contributor

The service is unmanaged with a count 1/0, which means that the ganesha daemon is still in the process of doing a graceful shutdown. [...] When the container has finally stopped, the service will no longer appear during an orch ls.

+1 for polling orch ls until the service has been removed. In my testing it can take as long as 30 seconds for the nfs container to completely stop.

I was waiting for 8 seconds before checking the status, and that worked in previous test runs. @mgfritch any idea what is causing it to take longer here?

@varshar16
Contributor

I assume you're using this for testing but NAK on this going into master.

I think Varsha uncovered a cephadm bug. [...] The orch ls command returns the nfs.ganesha-test service even though it's been deleted.

@varshar16 To get around this, I think you should change the test to poll for removal. Maybe that will help.

I have updated the test in PR #35646.

@sebastian-philipp
Contributor Author

looks like skip doesn't work:

2020-06-23T12:25:21.842 INFO:tasks.cephfs_test_runner:Starting test: test_create_and_delete_cluster (tasks.cephfs.test_nfs.TestNFS)
2020-06-23T12:25:21.843 INFO:tasks.cephfs_test_runner:Starting test: test_create_multiple_exports (tasks.cephfs.test_nfs.TestNFS)
2020-06-23T12:25:21.843 INFO:tasks.cephfs_test_runner:Starting test: test_export_create_and_delete (tasks.cephfs.test_nfs.TestNFS)
2020-06-23T12:25:21.844 INFO:tasks.cephfs_test_runner:test_create_and_delete_cluster (tasks.cephfs.test_nfs.TestNFS) ... test_create_multiple_exports (tasks.cephfs.test_nfs.TestNFS) ... test_export_create_and_delete (tasks.cephfs.test_nfs.TestNFS) ... 
2020-06-23T12:25:21.845 INFO:tasks.cephfs_test_runner:======================================================================
2020-06-23T12:25:21.845 INFO:tasks.cephfs_test_runner:FAIL: test_create_and_delete_cluster (tasks.cephfs.test_nfs.TestNFS)
2020-06-23T12:25:21.845 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2020-06-23T12:25:21.845 INFO:tasks.cephfs_test_runner:https://tracker.ceph.com/issues/46046
2020-06-23T12:25:21.846 INFO:tasks.cephfs_test_runner:======================================================================
2020-06-23T12:25:21.846 INFO:tasks.cephfs_test_runner:FAIL: test_create_multiple_exports (tasks.cephfs.test_nfs.TestNFS)
2020-06-23T12:25:21.846 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2020-06-23T12:25:21.847 INFO:tasks.cephfs_test_runner:https://tracker.ceph.com/issues/46046
2020-06-23T12:25:21.847 INFO:tasks.cephfs_test_runner:======================================================================
2020-06-23T12:25:21.847 INFO:tasks.cephfs_test_runner:FAIL: test_export_create_and_delete (tasks.cephfs.test_nfs.TestNFS)
2020-06-23T12:25:21.847 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2020-06-23T12:25:21.848 INFO:tasks.cephfs_test_runner:https://tracker.ceph.com/issues/46046
2020-06-23T12:25:21.848 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2020-06-23T12:25:21.848 INFO:tasks.cephfs_test_runner:Ran 3 tests in 13.703s
2020-06-23T12:25:21.849 INFO:tasks.cephfs_test_runner:
2020-06-23T12:25:21.849 INFO:tasks.cephfs_test_runner:FAILED (failures=3)
2020-06-23T12:25:21.849 INFO:tasks.cephfs_test_runner:
2020-06-23T12:25:21.849 INFO:tasks.cephfs_test_runner:======================================================================
2020-06-23T12:25:21.850 INFO:tasks.cephfs_test_runner:FAIL: test_create_and_delete_cluster (tasks.cephfs.test_nfs.TestNFS)
2020-06-23T12:25:21.850 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2020-06-23T12:25:21.850 INFO:tasks.cephfs_test_runner:https://tracker.ceph.com/issues/46046
2020-06-23T12:25:21.851 INFO:tasks.cephfs_test_runner:======================================================================
2020-06-23T12:25:21.851 INFO:tasks.cephfs_test_runner:FAIL: test_create_multiple_exports (tasks.cephfs.test_nfs.TestNFS)
2020-06-23T12:25:21.851 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2020-06-23T12:25:21.851 INFO:tasks.cephfs_test_runner:https://tracker.ceph.com/issues/46046
2020-06-23T12:25:21.852 INFO:tasks.cephfs_test_runner:======================================================================
2020-06-23T12:25:21.852 INFO:tasks.cephfs_test_runner:FAIL: test_export_create_and_delete (tasks.cephfs.test_nfs.TestNFS)
2020-06-23T12:25:21.852 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2020-06-23T12:25:21.853 INFO:tasks.cephfs_test_runner:https://tracker.ceph.com/issues/46046

@batrick
Member

batrick commented Jun 23, 2020

looks like skip doesn't work: [...]

You also need to enable skipping in your yaml file:

fail_on_skip: false

see also qa/suites/fs/basic_functional/tasks/acls-fuse-client.yaml.
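
Put together, the task fragment would look something like this (a sketch of the assumed shape, modeled on the referenced acls-fuse-client.yaml — the exact keys accepted by cephfs_test_runner should be checked against that file):

```yaml
tasks:
- cephfs_test_runner:
    # Without this, a skipped test is reported as a failure by the runner.
    fail_on_skip: false
    modules:
      - tasks.cephfs.test_nfs
```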

@sebastian-philipp force-pushed the disable-test_create_multiple_exports branch from 2693b48 to 2b47de5 on June 24, 2020 07:58