
mgr/mds_autoscaler: add autoscaling for mds mgmt #32731

Merged
batrick merged 1 commit into ceph:master from
mchangir:pybind/mgr/mds_autoscaler-mgr-plugin-to-deploy-and-configure-MDS-on-degraded-fle-system
Aug 10, 2020

Conversation

@mchangir
Contributor

@mchangir mchangir commented Jan 20, 2020

MDS instance management in response to changes to the 'max_mds' option and MDS process liveness.

Fixes: https://tracker.ceph.com/issues/40929
Signed-off-by: Milind Changire mchangir@redhat.com

Checklist

  • References tracker ticket
  • Updates documentation if necessary
  • Includes tests for new functionality or reproducer for bug

Show available Jenkins commands
  • jenkins retest this please
  • jenkins test crimson perf
  • jenkins test signed
  • jenkins test make check
  • jenkins test make check arm64
  • jenkins test submodules
  • jenkins test dashboard
  • jenkins test dashboard backend
  • jenkins test docs
  • jenkins render docs
  • jenkins test ceph-volume all
  • jenkins test ceph-volume tox

@mchangir mchangir changed the title mgr/mds_autoscaler: add autoscaling for mds mgmt Jan 20, 2020
@mchangir mchangir added the wip-mchangir-testing not yet production ready label Jan 20, 2020
@mchangir mchangir requested a review from batrick January 20, 2020 09:28
@mchangir mchangir force-pushed the pybind/mgr/mds_autoscaler-mgr-plugin-to-deploy-and-configure-MDS-on-degraded-fle-system branch from 5883416 to de06bd6 Compare January 20, 2020 10:25
@batrick batrick added cephfs Ceph File System needs-review labels Jan 20, 2020
Contributor

@sebastian-philipp sebastian-philipp left a comment

Please add this module to the ../tox.ini under mypy

def is_fewer_than_max_mds(self, fs):
    if len(fs['in']) < fs['max_mds']:
        return True
    return False
Member

It's not that simple. You need to review:

https://github.com/ceph/ceph/blob/master/src/mds/MDSMap.cc#L1025-L1033

and

bool MDSMonitor::maybe_promote_standby(FSMap &fsmap, Filesystem& fs)

In general, you only really need to ensure that sufficient standbys exist for a file system. The monitors will use standbys to stabilize the file system if possible. You will see an "INSUFFICIENT_STANDBY" warning generated by the monitors if there are not enough standbys for a file system. See also:

ceph/src/mds/MDSMap.h

Lines 242 to 250 in 0573918

mds_rank_t get_standby_count_wanted(mds_rank_t standby_daemon_count) const {
  ceph_assert(standby_daemon_count >= 0);
  std::set<mds_rank_t> s;
  get_standby_replay_mds_set(s);
  mds_rank_t standbys_avail = (mds_rank_t)s.size()+standby_daemon_count;
  mds_rank_t wanted = std::max(0, standby_count_wanted);
  return wanted > standbys_avail ? wanted - standbys_avail : 0;
}
void set_standby_count_wanted(mds_rank_t n) { standby_count_wanted = n; }

This module should be able to monitor for that via the below notify method for the "clog" events. However, this module should just replicate the logic that detects insufficient standbys and not try to parse the clog event message.
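The suggested replication of that logic in a mgr module might look roughly like the sketch below. This is a hedged illustration, not the PR's code: the fs_map field names ('filesystems', 'standbys', 'standby_count_wanted') follow the fs_map dump format, and unlike the C++ it deliberately ignores standby-replay daemons.

```python
# Sketch only: roughly mirrors MDSMap::get_standby_count_wanted() using the
# fs_map dump a mgr module sees. Unlike the C++ code, standby-replay MDSs
# are not added into the available count (a simplification).
def standby_shortfall(fs_map, fs_name):
    """How many more standby daemons the named file system wants (0 if satisfied)."""
    standbys_avail = len(fs_map.get('standbys', []))
    for fs in fs_map['filesystems']:
        mdsmap = fs['mdsmap']
        if mdsmap['fs_name'] != fs_name:
            continue
        wanted = max(0, mdsmap.get('standby_count_wanted', 0))
        return wanted - standbys_avail if wanted > standbys_avail else 0
    return 0
```

A module could call this from notify() on "fs_map" events and scale up whenever the shortfall is positive, instead of parsing clog messages.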

@mchangir mchangir force-pushed the pybind/mgr/mds_autoscaler-mgr-plugin-to-deploy-and-configure-MDS-on-degraded-fle-system branch 2 times, most recently from 5dc4000 to 6b2812b Compare January 22, 2020 14:11
 case CEPH_MSG_FS_MAP:
-    py_module_registry->notify_all("fs_map", "");
     handle_fs_map(ref_cast<MFSMap>(m));
+    py_module_registry->notify_all("fs_map", "");
Member

This should be in a separate commit with explanation.

Member

(Looks like you figured out the problem with point 1. in the mail you sent.)

I would also send this fix as a separate PR so it can get in sooner.

Contributor Author

@batrick I thought I had figured out the problem, but it doesn't look like it. The fs_map object is still undefined when the clog notification type is handled by the plugin.
Or maybe I turned too many knobs at once while changing the handling of notifications from fs_map to clog. The plugin probably needs to handle the fs_map notification to save the fs_map and then act on it when the clog notification is delivered.

Contributor Author

@batrick It was the missing scope specifier (self.) prefix on the fs_map member that was biting me.

@mchangir mchangir force-pushed the pybind/mgr/mds_autoscaler-mgr-plugin-to-deploy-and-configure-MDS-on-degraded-fle-system branch from 6b2812b to a23701a Compare January 23, 2020 09:09
        time.sleep(1)

    def is_insufficient_standby(self, message):
        return (message == 'Health check failed: insufficient standby MDS daemons available (MDS_INSUFFICIENT_STANDBY)')
Contributor

Is such a comparison the only way to figure out whether there are insufficient standbys? Wouldn't get_current_standby_count() < get_total_standby_count_required() be good enough?

Contributor Author

@vshankar you're right

if isinstance(notify_id, dict):
    message = notify_id.get('message')
    if self.is_insufficient_standby(message) and self.fs_map:
        self.filesystems = self.fs_map['filesystems']
Contributor

is it really required to maintain a copy of fsmap, filesystems and standbys?

Contributor Author

@vshankar did away with the copies now ... will update PR later

src/mgr/Mgr.cc Outdated
{}, &c->outbl, &c->outs, c);
}
}
fs_map_cond.notify_all();
Contributor

this has been moved from the start of the routine to the end -- what does it fix?

Contributor Author

@mchangir mchangir Jan 27, 2020

@vshankar I had thought of moving the notification to the end since my patch wasn't working, but my Python code wasn't what it was supposed to be. I'll be reverting these changes.

@mchangir mchangir force-pushed the pybind/mgr/mds_autoscaler-mgr-plugin-to-deploy-and-configure-MDS-on-degraded-fle-system branch 2 times, most recently from 9ed834d to 5bccc44 Compare February 3, 2020 13:27
@mchangir mchangir force-pushed the pybind/mgr/mds_autoscaler-mgr-plugin-to-deploy-and-configure-MDS-on-degraded-fle-system branch 3 times, most recently from 5f6de2f to d90f87e Compare March 9, 2020 06:15
@sebastian-philipp sebastian-philipp requested a review from a team March 9, 2020 09:12
if sb['state'] == 'up:active' and sb['rank'] == -1:
    name = sb['name']
    self.remove_mds(sb['name'])
    break
Member

I think this logic should be moved to cephadm so that it, in general, kills standby (mds,mgr) daemons when scaling down.

Contributor Author

@liewegas I presumed that cephadm was only responsible for implementing the mechanism of the operations, and that the policy decisions were supposed to be made by cephadm clients.

Member

cephadm has a kubernetes-like spec (that says, e.g., "3 daemons") and a controller that ensures that the right number of daemons are running. Removing or adding individual daemons will tend to fight with that: if you remove one, a new one will get created (somewhere), and if you add one, an existing one will get removed. Alternatively, if you adjust the count down, cephadm will pick which one to remove.

I think the best path forward is to make cephadm (and rook) just a bit smarter so that they prefer to remove standby daemons and not active ones. Then this tool just needs to adjust the total daemon count (max_mds + num standbys)...

Member

I have a half-written patch to make cephadm prefer to remove standbys... so I suggest simplifying this PR to just adjust the count.

Contributor Author

@liewegas "Then this tool just needs to adjust the total daemon count (max_mds + num standbys)" ... how do I do that?

Member

@liewegas liewegas Mar 12, 2020

In the most common case, it's as simple as ceph orch apply mds $fsname $count (apply_mds(...)). But the user might have a customized placement (specifying a label, hosts, etc.). So I think the right thing to do is fetch the current placement (you can do this with ceph orch ls --service-name mds.$fsname (describe_service(service_name='mds.whatever'))), modify just the count field in the spec.placement PlacementSpec (make a copy first -- don't modify in place), and then send that back to apply_mds().
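The copy-modify-apply step described above can be sketched as follows. Note the PlacementSpec and ServiceSpec classes here are minimal stand-ins for the real ceph.deployment classes, written so the idea is runnable in isolation; in the module the spec would come from describe_service() and go back via apply_mds().

```python
import copy
from dataclasses import dataclass, field

@dataclass
class PlacementSpec:          # stand-in for ceph's PlacementSpec
    count: int = 0
    hosts: list = field(default_factory=list)
    label: str = ''

@dataclass
class ServiceSpec:            # stand-in for ceph's ServiceSpec
    service_name: str = ''
    placement: PlacementSpec = field(default_factory=PlacementSpec)

def with_mds_count(spec, new_count):
    """Return a copy of the service spec with only placement.count changed."""
    new_spec = copy.deepcopy(spec)   # copy first -- don't modify in place
    new_spec.placement.count = new_count
    return new_spec
```

Any customized placement fields (label, hosts) survive untouched; only the count is rewritten before the spec is re-applied.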

Contributor Author

@liewegas all that describe_service() returns is one record, about the crash service.
I've posted a WIP PR; please take a look; I must be missing something that I can't fathom yet.

Contributor Author

@sebastian-philipp I'm trying to test my plugin on a vstart cluster on my laptop. I can't find any way to make cephadm act on the default cephfs.a cluster. It looks like cephadm should start the daemons itself to gain ownership of the cluster to effectively act on them.
Is there a way to instruct cephadm to start a file system and take ownership of all relevant daemons?
The MDS Autoscaler plugin is supposed to act on changes to the FSMap and spawn or kill MDS standby daemons as required. Could you suggest a workflow to achieve this? I've used Sage's suggestions to update my code, but the describe_service() API isn't listing anything other than the crash service. Same goes for the ceph orch ls command at the command line.

Contributor

@mchangir so, cephadm won't touch any daemons that are not created by cephadm, e.g. daemons created by vstart.sh. You'll have to create them using ceph orch apply mds ... and not via vstart.

is there a way to instruct cephadm to start a file-system and take ownership of all relevant daemons ?

yes: ceph fs volume create --placement=....

the describe_service() API isn't listing anything other than the crash service

exactly. you'll have to create the daemons using cephadm first.

@mchangir mchangir force-pushed the pybind/mgr/mds_autoscaler-mgr-plugin-to-deploy-and-configure-MDS-on-degraded-fle-system branch 2 times, most recently from 24217a1 to f7a33a8 Compare March 21, 2020 10:24
@sebastian-philipp
Contributor

mypy run-test: commands[0] | mypy --config-file=../../mypy.ini cephadm/module.py mgr_module.py dashboard/module.py mgr_util.py orchestrator/__init__.py progress/module.py rook/module.py osd_support/module.py test_orchestrator/module.py volumes/__init__.py mds_autoscaler/module.py
mds_autoscaler/module.py: note: In member "create_mds_old" of class "MDSAutoscaler":
mds_autoscaler/module.py:30: error: Module has no attribute "ServiceSpec"
mds_autoscaler/module.py: note: In member "delete_mds_old" of class "MDSAutoscaler":
mds_autoscaler/module.py:45: error: Value of type "Optional[Any]" is not indexable
mds_autoscaler/module.py:49: error: "MDSAutoscaler" has no attribute "remove_mds"
mds_autoscaler/module.py: note: In member "get_required_standby_count" of class "MDSAutoscaler":
mds_autoscaler/module.py:99: error: Value of type "Optional[Any]" is not indexable
mds_autoscaler/module.py: note: In member "get_current_standby_count" of class "MDSAutoscaler":
mds_autoscaler/module.py:106: error: Value of type "Optional[Any]" is not indexable
mds_autoscaler/module.py: note: In member "get_fs_name" of class "MDSAutoscaler":
mds_autoscaler/module.py:109: error: Value of type "Optional[Any]" is not indexable
Found 6 errors in 1 file (checked 11 source files)

@mchangir mchangir force-pushed the pybind/mgr/mds_autoscaler-mgr-plugin-to-deploy-and-configure-MDS-on-degraded-fle-system branch from f7a33a8 to 670e6c3 Compare April 7, 2020 13:34
@mchangir mchangir force-pushed the pybind/mgr/mds_autoscaler-mgr-plugin-to-deploy-and-configure-MDS-on-degraded-fle-system branch from 098205b to 80f3e44 Compare August 4, 2020 08:25
@batrick batrick changed the base branch from master to octopus August 4, 2020 23:02
@batrick batrick requested a review from a team August 4, 2020 23:02
@batrick batrick requested review from a team as code owners August 4, 2020 23:02
@batrick batrick changed the base branch from octopus to master August 4, 2020 23:02
@batrick batrick removed request for a team August 4, 2020 23:02
Member

@batrick batrick left a comment

I'm trying this out with vstart.sh: env MDS=5 ../src/vstart.sh -n -d --cephadm.

I'm seeing the module do some work (with the patches below suggested) but cephadm isn't spawning any new daemons. Have you tried this with vstart.sh?

assert fs_map is not None
for fsys in fs_map['filesystems']:
    if fsys.get('mdsmap').get('fs_name') == fs_name:
        return len(fsys.get('mdsmap').get('up'))
Member

Since you're looking at the daemon names in get_current_standby_count to determine which standbys belong to a file system, you need to do the same here. If the daemon was not spawned by the orchestrator, it should not be counted.

Contributor Author

fixed

Comment on lines +78 to +80
for fsys in fs_map['filesystems']:
    if fsys.get('mdsmap').get('fs_name') == fs_name:
        return len(fsys.get('mdsmap').get('up'))
Member

Suggested change
-for fsys in fs_map['filesystems']:
-    if fsys.get('mdsmap').get('fs_name') == fs_name:
-        return len(fsys.get('mdsmap').get('up'))
+for fs in fs_map['filesystems']:
+    if fs['mdsmap']['fs_name'] == fs_name:
+        return len(fs['mdsmap']['up'])

Contributor Author

fixed

assert fs_map is not None
for fs in fs_map['filesystems']:
    if fs['mdsmap']['fs_name'] == fs_name:
        return fs['mdsmap'].get('standby_count_wanted')
Member

Suggested change
-        return fs['mdsmap'].get('standby_count_wanted')
+        return fs['mdsmap']['standby_count_wanted']

Contributor Author

fixed

Comment on lines +85 to +87
for fsys in fs_map['filesystems']:
    if fsys.get('mdsmap').get('fs_name') == fs_name:
        return fsys.get('mdsmap', {}).get('max_mds')
Member

Suggested change
-for fsys in fs_map['filesystems']:
-    if fsys.get('mdsmap').get('fs_name') == fs_name:
-        return fsys.get('mdsmap', {}).get('max_mds')
+for fs in fs_map['filesystems']:
+    if fs['mdsmap']['fs_name'] == fs_name:
+        return fs['mdsmap']['max_mds']

Contributor Author

fixed

Comment on lines +125 to +126
except (orchestrator.OrchestratorError, AssertionError) as e:
    self.log.debug(f"fs:{fs_name} exception while verifying mds status: {e!r}")
Member

Suggested change
-except (orchestrator.OrchestratorError, AssertionError) as e:
-    self.log.debug(f"fs:{fs_name} exception while verifying mds status: {e!r}")
+except orchestrator.OrchestratorError as e:
+    self.log.exception(f"fs:{fs_name} exception while verifying mds status: {e}")

Member

Don't catch AssertionError, we want the module to blow up if that hits.

Contributor Author

fixed

    MDS autoscaler.
    """
    def __init__(self, *args, **kwargs):
        super(MDSAutoscaler, self).__init__(*args, **kwargs)
Member

Suggested change
-        super(MDSAutoscaler, self).__init__(*args, **kwargs)
+        MgrModule.__init__(self, *args, **kwargs)

With multiple inheritance, super is ambiguous. OrchestratorClientMixin has no __init__ method, too.
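A toy illustration of the suggested pattern (the class bodies are made up; only the names mimic the real ones):

```python
class MgrModule:
    def __init__(self, *args, **kwargs):
        self.initialized = True   # stands in for MgrModule's real setup

class OrchestratorClientMixin:
    pass                          # no __init__ of its own, as noted above

class MDSAutoscaler(MgrModule, OrchestratorClientMixin):
    def __init__(self, *args, **kwargs):
        # Explicit base-class call: unambiguous about which __init__ runs.
        MgrModule.__init__(self, *args, **kwargs)
```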

Contributor Author

fixed

@mchangir
Contributor Author

mchangir commented Aug 5, 2020

I'm trying this out with vstart.sh: env MDS=5 ../src/vstart.sh -n -d --cephadm.

I'm seeing the module do some work (with the patches below suggested) but cephadm isn't spawning any new daemons. Have you tried this with vstart.sh?

According to @sebastian-philipp, cephadm will not spawn more than one MDS per host for a file system.
So I typically have virtual machines dedicated to the file system while running the vstart cluster.
I run 3 VMs to do minimal testing of the test file system and add the VMs to the orchestrator host list so that cephadm can deploy the MDSs on them.
Also, it does take a while for the container image to download and deploy to the VMs.

@sebastian-philipp
Contributor

I'm trying this out with vstart.sh: env MDS=5 ../src/vstart.sh -n -d --cephadm.

That's not going to work: MDSs deployed by vstart itself aren't managed by cephadm. You're going to need a few VMs and add them as hosts to cephadm. Then you can deploy MDSs on them.

So, something like this should work:

vstart.sh -n --cephadm
ceph orch host add vm-1
ceph orch host add vm-2
ceph orch host add vm-3
ceph orch apply mds count:2

@batrick
Member

batrick commented Aug 5, 2020

I'm trying this out with vstart.sh: env MDS=5 ../src/vstart.sh -n -d --cephadm.
I'm seeing the module do some work (with the patches below suggested) but cephadm isn't spawning any new daemons. Have you tried this with vstart.sh?

According to @sebastian-philipp, cephadm will not spawn more than one MDS per host for a file system.

Thanks for reminding me. However, I didn't see cephadm even spawn one daemon on the host. Perhaps it knows that there are MDS it does not manage already on that host (localhost)?

So I typically have virtual machines dedicated to the file system while running the vstart cluster.
I run 3 VMs to do minimal testing of the test file system and add the VMs to the orchestrator host list so that cephadm can deploy the MDSs on them.
Also, it does take a while for the container image to download and deploy to the VMs.

Ok. Let's get the patches I've suggested above in then we can merge if @sebastian-philipp is satisfied.

@mchangir mchangir force-pushed the pybind/mgr/mds_autoscaler-mgr-plugin-to-deploy-and-configure-MDS-on-degraded-fle-system branch from 80f3e44 to b93a95c Compare August 6, 2020 03:18
@mchangir
Contributor Author

mchangir commented Aug 6, 2020

@sebastian-philipp please review

@batrick
Member

batrick commented Aug 6, 2020

Good first milestone. Just waiting on @sebastian-philipp for his final take.

Contributor

@sebastian-philipp sebastian-philipp left a comment

lgtm

@sebastian-philipp
Contributor

do we have a QA run validating this PR?

@mchangir
Contributor Author

mchangir commented Aug 6, 2020

do we have a QA run validating this PR?

Nope; it's been all manual testing so far.
I'll need more time to code up teuthology test cases.

@sebastian-philipp sebastian-philipp added the wip-swagner-testing My Teuthology tests label Aug 6, 2020
@sebastian-philipp
Contributor

Checking for unpackaged file(s): /usr/lib/rpm/check-files /home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/16.0.0-4227-gc1ebc86a741/rpm/el8/BUILDROOT/ceph-16.0.0-4227.gc1ebc86a741.el8.x86_64
error: Installed (but unpackaged) file(s) found:
   /usr/share/ceph/mgr/mds_autoscaler/__init__.py
   /usr/share/ceph/mgr/mds_autoscaler/module.py


RPM build errors:
    Installed (but unpackaged) file(s) found:
   /usr/share/ceph/mgr/mds_autoscaler/__init__.py
   /usr/share/ceph/mgr/mds_autoscaler/module.py
+ rm -fr /tmp/install-deps.3816209
Build step 'Execute shell' marked build as failure

https://jenkins.ceph.com/job/ceph-dev-new-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=centos8,DIST=centos8,MACHINE_SIZE=gigantic/44262//consoleFull

@sebastian-philipp sebastian-philipp removed the wip-swagner-testing My Teuthology tests label Aug 7, 2020
mgr plugin to deploy and configure MDSs in response to degraded file system

MDS instance management as per changes to:
* 'max_mds' option
* 'standby_count_wanted' option
* mds liveness and transitions from standby to active

mds_autoscaler plugin test credit goes to Sebastian Wagner.

Fixes: https://tracker.ceph.com/issues/40929
Signed-off-by: Milind Changire <mchangir@redhat.com>
@mchangir mchangir force-pushed the pybind/mgr/mds_autoscaler-mgr-plugin-to-deploy-and-configure-MDS-on-degraded-fle-system branch from b93a95c to f69abe6 Compare August 7, 2020 11:39
@mchangir
Contributor Author

mchangir commented Aug 7, 2020

Checking for unpackaged file(s): /usr/lib/rpm/check-files /home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/16.0.0-4227-gc1ebc86a741/rpm/el8/BUILDROOT/ceph-16.0.0-4227.gc1ebc86a741.el8.x86_64
error: Installed (but unpackaged) file(s) found:
   /usr/share/ceph/mgr/mds_autoscaler/__init__.py
   /usr/share/ceph/mgr/mds_autoscaler/module.py


RPM build errors:
    Installed (but unpackaged) file(s) found:
   /usr/share/ceph/mgr/mds_autoscaler/__init__.py
   /usr/share/ceph/mgr/mds_autoscaler/module.py
+ rm -fr /tmp/install-deps.3816209
Build step 'Execute shell' marked build as failure

https://jenkins.ceph.com/job/ceph-dev-new-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=centos8,DIST=centos8,MACHINE_SIZE=gigantic/44262//consoleFull

I've added the mds_autoscaler directory to ceph.spec.in.
Please check and let me know if anything else needs fixing.
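The fix is of this shape (a sketch only: the subpackage name and exact placement in ceph.spec.in are assumptions here, not a quote of the actual change):

```
# Add the new module directory to the mgr modules subpackage's %files
# list so rpmbuild stops flagging it as installed-but-unpackaged.
%files mgr-modules-core
%{_datadir}/ceph/mgr/mds_autoscaler
```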

@batrick batrick added needs-qa wip-pdonnell-testing and removed wip-mchangir-testing not yet production ready labels Aug 8, 2020
@batrick
Copy link
Member

batrick commented Aug 10, 2020

Follow-up work:



6 participants