qa/cephadm: Add iSCSI #35543
Force-pushed from e416bf6 to 87ac707.
```python
(
    # fixed daemon id for teuthology.
    IscsiServiceSpec(
        service_type='iscsi',
        service_id='iscsi',
    ),
    DaemonDescription(
        daemon_type='iscsi',
        daemon_id="iscsi.a",
        hostname="host1",
    ),
    True
),
```

```python
# for cephadm, `service_id` is fixed to "iscsi" right now.
if self.service_id != 'iscsi':
    raise ServiceSpecValidationError(
        f'service_id must be `iscsi`, not `{self.service_id or "<unset>"}`')
```
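The `service_id` check above can be sketched as a standalone, runnable snippet. Note that `IscsiServiceSpec` and `ServiceSpecValidationError` here are minimal stand-ins for the real ceph classes, for illustration only:

```python
# Minimal stand-ins for the real ceph classes -- illustration only.
class ServiceSpecValidationError(Exception):
    pass

class IscsiServiceSpec:
    def __init__(self, service_type='iscsi', service_id=None):
        self.service_type = service_type
        self.service_id = service_id
        self.validate()

    def validate(self):
        # for cephadm, `service_id` is fixed to "iscsi" right now.
        if self.service_id != 'iscsi':
            raise ServiceSpecValidationError(
                f'service_id must be `iscsi`, not `{self.service_id or "<unset>"}`')

IscsiServiceSpec(service_id='iscsi')  # passes validation
try:
    IscsiServiceSpec(service_id='igw')
except ServiceSpecValidationError as e:
    print(e)  # service_id must be `iscsi`, not `igw`
```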
We're getting there, @matthewoliver:
Needs #35141?
|
Oh, it's doing a relabel on a virtual filesystem, which usually lives under /sys/kernel. What env is this? CentOS or something running SELinux? I assume that's why a relabel is happening. We may need to add a rule to src/selinux/. Let me pull up my SELinux dev env again.
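One reason relabeling a path like this fails is that it sits on a pseudo-filesystem such as configfs, where restorecon generally can't set labels via xattrs. A hedged sketch of spotting such mounts from /proc/mounts-style text (the fstype list is illustrative, not exhaustive):

```python
# Pseudo-filesystems where SELinux relabeling typically can't apply
# (illustrative subset, not an exhaustive list).
PSEUDO_FSTYPES = {'configfs', 'sysfs', 'proc', 'debugfs'}

def pseudo_mounts(proc_mounts_text):
    """Return (mountpoint, fstype) pairs for pseudo-filesystems,
    given text in /proc/mounts format: 'dev mountpoint fstype opts 0 0'."""
    hits = []
    for line in proc_mounts_text.splitlines():
        fields = line.split()
        if len(fields) >= 3 and fields[2] in PSEUDO_FSTYPES:
            hits.append((fields[1], fields[2]))
    return hits

sample = """\
configfs /sys/kernel/config configfs rw,relatime 0 0
/dev/vda1 / ext4 rw,relatime 0 0
"""
print(pseudo_mounts(sample))  # [('/sys/kernel/config', 'configfs')]
```

On a live host you would feed it the contents of /proc/mounts instead of the sample string.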
|
No matter how I run restorecon in an attempt to relabel, I can't get the same error. It says it's coming from bash. Where can I find the script, so I can see how they are relabeling?
|
Maybe the relabel is a red herring. Why is the service failing and the container not found?
|
Here is the log of the failed run: http://pulpito.ceph.com/swagner-2020-06-17_12:51:26-rados:cephadm-wip-swagner-testing-2020-06-17-1044-distro-basic-smithi/5157644/ — maybe you have more luck finding a possible hint?
|
Just occurred to me: this is an Ubuntu server. So maybe it's not actually SELinux getting in the way, but AppArmor? Otherwise I'm confused about the "relabel" error message — "relabel"ing is very SELinux-centric (I thought, though probably AppArmor uses it as well). Is there a way to check whether the Ubuntu servers are in AppArmor strict (or enforce) mode, since they say it restricts mount access to configfs? Or do I need to go down the AppArmor/Ubuntu rabbit hole next? :P
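One way to answer the "is AppArmor enforcing?" question is via the kernel's standard interfaces (`/sys/module/apparmor/parameters/enabled` and `/sys/kernel/security/apparmor/profiles`, which `aa-status` also reads). A hedged sketch; the profile parsing here is illustrative:

```python
# Sketch of checking AppArmor state on an Ubuntu host via the kernel's
# standard interfaces. Assumes the usual sysfs/securityfs paths.

def apparmor_enabled():
    """True if the kernel reports AppArmor as enabled ('Y')."""
    try:
        with open('/sys/module/apparmor/parameters/enabled') as f:
            return f.read().strip() == 'Y'
    except OSError:
        return False  # module interface absent: AppArmor not available

def classify_profiles(profiles_text):
    """Bucket lines like '/usr/sbin/cupsd (enforce)' -- the format of
    /sys/kernel/security/apparmor/profiles -- by enforcement mode."""
    modes = {'enforce': [], 'complain': []}
    for line in profiles_text.splitlines():
        if line.endswith('(enforce)'):
            modes['enforce'].append(line.rsplit(' ', 1)[0])
        elif line.endswith('(complain)'):
            modes['complain'].append(line.rsplit(' ', 1)[0])
    return modes
```

On the host itself, `sudo aa-status` prints the same summary directly.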
|
Hrmm, so I built up an Ubuntu server (vagrant) and then deployed a cephadm iSCSI host via vstart, and it didn't seem to trigger anything in AppArmor. I added more policies and still nothing. I'll try to turn on more debugging just in case. However, maybe a vstart cluster doesn't install things in the same locations (which is why I tried deploying via cephadm, because it mounts in /var/lib/ceph). I'm new to AppArmor, so let me see if I can get some better logging happening, to see if there could be an AppArmor policy hitting it.
|
Let me first run this PR on more than one OS (note: the green tests did fail as well, but were wrongly marked as passed), e.g. on CentOS:
|
Hello, FYI I cherry-picked the commits from this PR and created a branch that uses my ceph-salt task, so I can try to reproduce it downstream where it's more convenient, since I can trigger a run with
|
Hmm, I ran the job downstream with a sleep, and even though I didn't manage to reproduce the issue, the iSCSI service container seems to be "flapping".
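"Flapping" here means the container keeps restarting in a tight loop. A hedged sketch of how one might flag that from a list of container start timestamps (e.g. collected from `podman ps` or the journal); the window and restart thresholds are arbitrary:

```python
# Sketch: flag a "flapping" service from its container start timestamps.
# Thresholds are arbitrary illustrations, not cephadm's actual logic.
def is_flapping(start_times, window=300.0, max_restarts=3):
    """True if more than max_restarts starts fall inside any `window`
    seconds. start_times: sorted epoch seconds."""
    for i in range(len(start_times)):
        j = i
        while j < len(start_times) and start_times[j] - start_times[i] <= window:
            j += 1
        if j - i > max_restarts:
            return True
    return False

print(is_flapping([0, 60, 120, 180]))  # True: 4 starts within 300s
print(is_flapping([0, 600, 1200]))     # False: starts are spread out
```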
|
Relates to https://tracker.ceph.com/issues/46453
Force-pushed from 51661df to bc0bd9c, then from bc0bd9c to 7e12e3a.
Smoke is green: https://pulpito.ceph.com/swagner-2020-09-11_10:24:03-rados:cephadm:smoke-wip-swagner3-testing-2020-09-11-1034-distro-basic-smithi/ — scheduling a full run now.
Signed-off-by: Sebastian Wagner <sebastian.wagner@suse.com>
Force-pushed from 7e12e3a to 1bd86d3.
Rebased.