[core] (cgroups 11/n) Raylet will move system processes into cgroup on startup #56522
Merged
Conversation
Commits from irabbani:
- …to perform cgroup operations.
- …instead of clone for older kernel headers < 5.7 (which is what we have in CI).
- …irabbani/cgroups-1
- fix CI.
Contributor
Author
I've kicked off macOS and Windows tests to be extra super duper ultra sure that post-merge won't break. Let's wait for them to pass and I'll ping for merge.
Contributor
Author
CI post-merge is in bad shape. All failures are unrelated.
@edoakes this should be good to merge.
Collaborator
CI is a little too red for me to be comfortable merging this; I don't want to get into the habit of force merging. Let's hold off until the test issues are resolved.
Collaborator
Test failure: https://buildkite.com/ray-project/premerge/builds/49657#019977d7-530a-4b78-8296-3e948219c6c0/179-5349
I don't see it on the tracker -- is it relevant?
ZacAttack pushed a commit to ZacAttack/ray that referenced this pull request on Sep 24, 2025: …n startup (ray-project#56522)
elliot-barn pushed a commit that referenced this pull request on Sep 24, 2025: …n startup (#56522)
edoakes added a commit that referenced this pull request on Sep 24, 2025:
…cation cgroup (#56549) This PR stacks on #56522. For more details about the resource isolation project see #54703. This PR makes the raylet move the runtime_env and dashboard agents into the system cgroup. Workers are now spawned inside the application cgroup. It introduces the following:
* A new target, `raylet_cgroup_types`, which defines the type used by all functions that need to add a process to a cgroup.
* A new parameter added to the `NodeManager`, `WorkerPool`, `AgentManager`, and `Process` constructors: a callback that uses the CgroupManager to add a process to the respective cgroup.
* The callback is created in `main.cc`.
* `main.cc` owns the CgroupManager because it needs to outlive the `WorkerPool`.
* `process.cc` calls the callback after fork() in the child process, so nothing else can happen in the forked process before it's moved into the correct cgroup (see the sketch below).
* Integration tests in Python for end-to-end testing of cgroups, verifying that system and application processes are moved into their respective cgroups. The tests are inside `python/ray/tests/resource_isolation/test_resource_isolation_integration.py` and have similar setup/teardown to the C++ integration tests introduced in #55063.
Co-authored-by: Edward Oakes <ed.nmi.oakes@gmail.com>
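A minimal sketch of the fork-then-callback pattern described in that commit, with hypothetical names (this is not Ray's actual `process.cc`): the child's first action after `fork()` is to place itself into the target cgroup via the callback, and only then does it `exec()`, so no user code can run outside the intended cgroup.

```cpp
// Hypothetical sketch of the "callback immediately after fork()" pattern.
// SpawnInCgroup, AddToCgroupFn, and the cgroup path are illustrative names,
// not Ray's actual API.
#include <sys/types.h>
#include <unistd.h>

#include <fstream>
#include <functional>

using AddToCgroupFn = std::function<void(pid_t)>;

pid_t SpawnInCgroup(const char *path, char *const argv[],
                    const AddToCgroupFn &add_to_cgroup) {
  pid_t pid = fork();
  if (pid == 0) {
    // Child: move into the cgroup before anything else, then exec.
    add_to_cgroup(getpid());
    execv(path, argv);
    _exit(127);  // Only reached if exec fails.
  }
  return pid;  // Parent: child's pid, or -1 if fork failed.
}

// Example callback: cgroup v2 migration by writing the child's pid to the
// cgroup's cgroup.procs file (the path here is made up).
void AddToApplicationCgroup(pid_t pid) {
  std::ofstream("/sys/fs/cgroup/ray/application/cgroup.procs", std::ios::app)
      << pid;
}
```

One caveat for a multithreaded parent: only async-signal-safe calls are safe between `fork()` and `exec()`, so a production version of the callback would use raw `write()` rather than iostreams.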
marcostephan pushed a commit to marcostephan/ray that referenced this pull request on Sep 24, 2025: …n startup (ray-project#56522)
marcostephan pushed a commit to marcostephan/ray that referenced this pull request on Sep 24, 2025: …cation cgroup (ray-project#56549)
elliot-barn pushed a commit that referenced this pull request on Sep 27, 2025: …n startup (#56522)
elliot-barn pushed a commit that referenced this pull request on Sep 27, 2025: …cation cgroup (#56549)
dstrodtman pushed a commit that referenced this pull request on Oct 6, 2025: …n startup (#56522)
dstrodtman pushed a commit to dstrodtman/ray that referenced this pull request on Oct 6, 2025: …cation cgroup (ray-project#56549)
justinyeh1995 pushed a commit to justinyeh1995/ray that referenced this pull request on Oct 20, 2025: …n startup (ray-project#56522)
justinyeh1995 pushed a commit to justinyeh1995/ray that referenced this pull request on Oct 20, 2025: …cation cgroup (ray-project#56549)
landscapepainter pushed a commit to landscapepainter/ray that referenced this pull request on Nov 17, 2025: …n startup (ray-project#56522)
landscapepainter pushed a commit to landscapepainter/ray that referenced this pull request on Nov 17, 2025: …cation cgroup (ray-project#56549)
Future-Outlier pushed a commit to Future-Outlier/ray that referenced this pull request on Dec 7, 2025: …n startup (ray-project#56522)
Future-Outlier pushed a commit to Future-Outlier/ray that referenced this pull request on Dec 7, 2025: …cation cgroup (ray-project#56549)
This PR stacks on #56352 .
For more details about the resource isolation project see #54703.
This PR makes the raylet move the system processes into the system cgroup on startup if resource isolation is enabled.
It introduces the following:
* A new raylet CLI arg, `--system-pids`, which is a comma-separated string of pids of system processes that are started before the raylet (see the sketch after this list). As of today, it contains:
  * On the head node: gcs_server, dashboard_api_server, ray client server, monitor (autoscaler).
  * On every node (including head): process subreaper, log monitor.
* End-to-end integration tests for resource isolation with the Ray SDK (`ray.init`) and the Ray CLI (`ray --start`).

There are a few rough edges (I've added a comment on the PR where relevant):
1. The construction of ResourceIsolationConfig is spread across multiple call sites (create the object, add the object store memory, add the system pids). The big positive of doing it this way was to fail fast on invalid user input (in scripts.py and worker.py). I think it needs to have at least two components: the user input (cgroup_path, system_reserved_memory, ...) and the derived input (system_pids, total_system_reserved_memory).
2. How to determine which processes should be moved? Right now I'm using `self.all_processes` in `node.py`. It _should_ contain all processes started so far, but there's no guarantee.
3. How intrusive should the integration test be? Should we count the number of pids inside the system cgroup? (This was answered in #56549.)
4. How should a user set up multiple nodes on the same VM? I haven't written an integration test for it yet because there are multiple options for how to set this up.
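For illustration, here is a minimal sketch of the mechanism behind `--system-pids`, assuming cgroup v2: parse the comma-separated pid list and migrate each process by writing its pid to the target cgroup's `cgroup.procs` file. The function names are hypothetical, not Ray's actual implementation.

```cpp
// Hypothetical sketch, not Ray's actual code: move the processes named by
// --system-pids into the system cgroup by writing each pid to cgroup.procs.
#include <sys/types.h>

#include <fstream>
#include <sstream>
#include <string>
#include <vector>

// Parse a comma-separated pid list such as "123,456,789".
std::vector<pid_t> ParseSystemPids(const std::string &csv) {
  std::vector<pid_t> pids;
  std::stringstream ss(csv);
  std::string token;
  while (std::getline(ss, token, ',')) {
    if (!token.empty()) pids.push_back(static_cast<pid_t>(std::stol(token)));
  }
  return pids;
}

// Migrate each pid into the cgroup rooted at cgroup_dir. In cgroup v2,
// writing a pid to <cgroup>/cgroup.procs moves that whole process.
bool MovePidsToCgroup(const std::vector<pid_t> &pids,
                      const std::string &cgroup_dir) {
  bool ok = true;
  for (pid_t pid : pids) {
    std::ofstream procs(cgroup_dir + "/cgroup.procs", std::ios::app);
    procs << pid << "\n";
    // The write can fail if the process already exited or the raylet lacks
    // permission on the cgroup hierarchy; surface that to the caller.
    ok = ok && static_cast<bool>(procs.flush());
  }
  return ok;
}
```

A failed write here is exactly the "no guarantee" case from rough edge 2: a pid collected from `self.all_processes` may have exited before the raylet migrates it.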