Add ability to use ram disk for jenkins workspace #47746
Merged
brianseeders merged 9 commits into elastic:master on Oct 18, 2019
Conversation
Contributor
Pinging @elastic/kibana-operations (Team:Operations)
jbudz
reviewed
Oct 9, 2019
Contributor
Neat - have you seen any performance changes with all RAM?
jbudz
reviewed
Oct 9, 2019
Contributor
Author
Unfortunately not. The disk performance was already pretty good because we were using such large disks (2TB). I think the windows of time where it really matters (e.g. when several of the parallel Elasticsearch instances are doing IO-heavy work at the same time) are so short that it ends up not making a big difference overall. It should let us move from 2TB disks to ~50GB disks though, which should be a pretty good cost difference.
Contributor
💔 Build Failed

Contributor
💔 Build Failed

Contributor
Author
@elasticmachine merge upstream

Contributor
💚 Build Succeeded

This was referenced Oct 10, 2019

Contributor
Author
@elasticmachine merge upstream

jbudz
approved these changes
Oct 11, 2019

Contributor
💚 Build Succeeded

Contributor
Author
@elasticmachine merge upstream

Contributor
💔 Build Failed

Contributor
Author
@elasticmachine merge upstream

Contributor
💚 Build Succeeded

Contributor
Author
@elasticmachine merge upstream

Contributor
💚 Build Succeeded
brianseeders added a commit to brianseeders/kibana that referenced this pull request on Oct 18, 2019

* Add ability to use ram disk for jenkins workspace
* Re-combine ciGroup agents
* Address some PR feedback / questions
* Add --preserve-root
brianseeders added a commit that referenced this pull request on Oct 21, 2019
This work comes after seeing flakiness in builds that is possibly due to resource constraints around disk I/O.
This work puts the Jenkins workspace for our larger build agents into memory, instead of on disk. Elasticsearch is doing the same thing for at least some of their builds.
I have 36 builds of this on my Jenkins sandbox (it's on my sandbox because I needed to schedule it to run hourly) with no failures so far.
I originally tried using GCP local SSDs, but they made less of a difference than I had hoped, and would have been more difficult to fully implement, so I switched to trying in-memory.
The instances we use for functional tests currently have 120GB of RAM, so this should work well until we switch to smaller machines or need significantly more space for storing files. So far, I have not seen any of the instances come close to running out of memory during a build; the worst case so far left about 40GB free, which is plenty of breathing room.
This should also let us shrink the root volume SSD for the instances, which should bring costs down quite a bit from where they are right now. They are currently 2TB per instance, sized that large for performance rather than capacity.
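The core of the approach described above can be sketched as a tmpfs mount over the agent's workspace directory. This is a minimal illustration, not the PR's actual implementation: the path and size below are hypothetical, and the tmpfs size must stay well under the agent's physical RAM (here, the ~120GB instances) to leave headroom for the build itself.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical values -- the real workspace path and sizing live in the CI config.
WORKSPACE_DIR="/var/lib/jenkins/workspace"
RAMDISK_SIZE="64g"   # must stay well below physical RAM (~120GB on these agents)

sudo mkdir -p "$WORKSPACE_DIR"

# Back the workspace with memory instead of the root-volume SSD.
if ! mountpoint -q "$WORKSPACE_DIR"; then
  sudo mount -t tmpfs -o "size=${RAMDISK_SIZE}" tmpfs "$WORKSPACE_DIR"
fi

# Clean the workspace between builds. --preserve-root (mentioned in the
# commit message above) makes rm refuse to operate on "/" itself, a cheap
# guard in scripted cleanup paths.
rm -rf --preserve-root "${WORKSPACE_DIR:?}"/*
```

Because tmpfs contents vanish on reboot, an equivalent persistent setup would be an `/etc/fstab` entry such as `tmpfs /var/lib/jenkins/workspace tmpfs size=64g 0 0` applied by the instance provisioning, rather than an ad-hoc mount.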