Adaptive min max #1797
Merged: mrocklin merged 10 commits into dask:master on Mar 3, 2018
Conversation
mrocklin (Member, Author) commented:
There are still some issues with this. Maximum isn't implemented.
Force-pushed from 547c602 to ad52435.
This makes the cluster wait a few beats between thinking that a worker should maybe be removed and actually removing it. This helps to avoid unnecessary churn.
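The "wait a few beats" behavior described above can be sketched as a small debounce: a worker is only retired after it has been flagged as removable for several consecutive adaptive cycles. This is an illustrative sketch of the idea, not the actual `distributed.Adaptive` internals; the class and parameter names here are hypothetical.

```python
class RetireDebouncer:
    """Only retire workers flagged as idle for `patience` consecutive cycles."""

    def __init__(self, patience=3):
        self.patience = patience  # consecutive flags required before retiring
        self.flagged = {}         # worker -> consecutive-flag count

    def update(self, idle_workers):
        """Feed this cycle's idle workers; return those safe to retire."""
        idle = set(idle_workers)
        # Reset counters for workers that became busy again
        for w in list(self.flagged):
            if w not in idle:
                del self.flagged[w]
        to_retire = []
        for w in sorted(idle):
            self.flagged[w] = self.flagged.get(w, 0) + 1
            if self.flagged[w] >= self.patience:
                to_retire.append(w)
        return to_retire
```

A worker that goes busy again between cycles has its counter reset, which is what avoids the churn of removing and immediately re-adding workers.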
Force-pushed from ff1a45b to 2bed8d2.
Member commented:
Thanks @mrocklin - I'm looking forward to using this!
azjps added a commit to azjps/dask-drmaa that referenced this pull request on Mar 15, 2018:
- Support passing kwargs to distributed.Adaptive, which now includes arguments like minimum and maximum (number of workers).
- Add an optional workers argument to _retire_workers() to match dask/distributed#1797
azjps added a commit to azjps/dask-drmaa that referenced this pull request on Mar 15, 2018:
- Support passing kwargs to distributed.Adaptive.__init__, which now takes keyword arguments like minimum and maximum (number of workers).
- Add an optional workers argument to _retire_workers() to match dask/distributed#1797 -- currently Adaptive raises a TypeError.
jakirkham pushed a commit to dask/dask-drmaa that referenced this pull request on Mar 24, 2018:
* Compatibility fixes with distributed 1.21.3
  - Support passing kwargs to distributed.Adaptive.__init__, which now takes keyword arguments like minimum and maximum (number of workers).
  - Add an optional workers argument to _retire_workers() to match dask/distributed#1797 -- currently Adaptive raises a TypeError.
* Adaptive memory resource compatibility fix for distributed==1.21.0
  In dask/distributed#1594, the scheduler's internal maps of task objects were changed from using their keys to using TaskState objects. However, dask_drmaa.Adaptive was still querying for keys, so new workers never found the memory resource constraints for pending tasks, and consequently tasks never found workers with sufficient resources. This caused the unit test test_adaptive_memory to wait indefinitely. Try to fix this to support distributed both pre- and post-1.21.0, and un-skip test_adaptive_memory.
* basestring -> six.string_types (was testing on Python 2; switching to Python 2/3-compatible code)
* Add six to requirements.txt. Also a couple of miscellaneous comments, including a Windows-specific comment for running docker-based tests.
* Undo Windows comments (moving to a separate PR)
* Drop support for distributed < 1.21.0. Update requirements.txt to require distributed >= 1.21.0, since there are internal changes in the way tasks are stored, and drop the corresponding backwards-compatibility fixes. Feel free to revert if distributed 1.20.x support is desired.
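The keys-versus-TaskState incompatibility described in the commit message above can be bridged with a tiny shim. This is a hedged sketch of the general technique, not code from dask-drmaa: `task_key` is a hypothetical helper that accepts either a plain key (pre-1.21.0 scheduler collections) or a TaskState-like object with a `.key` attribute (post-1.21.0).

```python
def task_key(task):
    """Return the task's key whether `task` is a plain key string
    (distributed < 1.21.0) or a TaskState-like object (>= 1.21.0)."""
    # TaskState objects expose the key via a `.key` attribute;
    # plain keys are returned unchanged.
    return getattr(task, "key", task)
```

Code that iterates over scheduler task collections can then call `task_key` uniformly instead of assuming one representation.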
Replaces #1618. cc @jacobtomlinson @jhamman
This adds minimum and maximum keywords to the Adaptive class to set limits on available workers.
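The core of the minimum/maximum behavior is a simple clamp on however many workers the adaptive policy would otherwise request. The sketch below illustrates that clamp; the function name and signature are illustrative, not the actual Adaptive internals (with the PR itself one would pass the bounds as keyword arguments, e.g. something like `Adaptive(..., minimum=2, maximum=10)`).

```python
def clamp_workers(requested, minimum=0, maximum=None):
    """Bound a requested worker count between minimum and maximum.

    `maximum=None` means no upper bound, mirroring the idea of an
    optional cap on cluster size.
    """
    if maximum is not None:
        requested = min(requested, maximum)
    return max(requested, minimum)
```

The minimum is applied last so that an idle cluster still keeps a floor of workers even when the policy would scale to zero.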