In large solutions we return a large number of LSP completion items, which take seconds to serialize/deserialize. @NTaylorMullen was seeing issues on list sizes of 2k-3k items, and it's likely we will return even more, especially once unimported types completion is supported.
To resolve this, we should implement the `isIncomplete` flag from the LSP spec:
https://microsoft.github.io/language-server-protocol/specification#textDocument_completion
This flag will allow us to cap the initial completion list at a reasonable size (starting at 1k items for now). Then, as typing continues, the client will re-request completions from the server and we can return up to 1k more (filtered down).
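Per the spec, the response shape is a `CompletionList` whose `isIncomplete` flag tells the client to re-query as typing continues. A minimal TypeScript sketch of the capping behavior (`capCompletionList` and `MAX_ITEMS` are hypothetical names for illustration, not Roslyn code):

```typescript
interface CompletionItem {
  label: string;
  preselect?: boolean;
}

// Shape from the LSP spec: when isIncomplete is true, the client
// re-requests completions from the server as the user keeps typing.
interface CompletionList {
  isIncomplete: boolean;
  items: CompletionItem[];
}

const MAX_ITEMS = 1000; // hypothetical cap, matching the 1k starting point above

function capCompletionList(allItems: CompletionItem[]): CompletionList {
  if (allItems.length <= MAX_ITEMS) {
    // Small enough to return whole; no re-requests needed.
    return { isIncomplete: false, items: allItems };
  }
  // Take the first 1k items (assumed already sorted), then append the
  // preselected item if it would otherwise have been cut off.
  const items = allItems.slice(0, MAX_ITEMS);
  const preselected = allItems.find(i => i.preselect);
  if (preselected && !items.includes(preselected)) {
    items.push(preselected);
  }
  return { isIncomplete: true, items };
}
```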
A few notes for the Roslyn implementation:
- On initial request, we calculate the entire list, return the first 1k items from the list (in sorted order) plus the preselected item (if it exists), then cache the entire list for subsequent requests.
- For followup completion requests, when the trigger is TriggerForIncompleteCompletions we use the cached list and filter it down based on the typed characters to return the next 1k items. If it is not the incomplete trigger, we treat it as an initial request, use the initial-request behavior above, and wipe the previous cache.
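The two bullets above can be sketched as a single request handler. This is an illustrative TypeScript sketch, not Roslyn's implementation: the cache shape, `MAX_ITEMS`, and reducing trigger detection to a boolean are all assumptions.

```typescript
interface CompletionItem { label: string; }

const MAX_ITEMS = 1000;                  // hypothetical cap from the notes above
let cachedItems: CompletionItem[] | null = null;

function handleCompletionRequest(
  computeAllItems: () => CompletionItem[], // full (expensive) list computation
  typedText: string,                       // characters typed so far
  isIncompleteTrigger: boolean,            // stands in for TriggerForIncompleteCompletions
): { isIncomplete: boolean; items: CompletionItem[] } {
  if (isIncompleteTrigger && cachedItems !== null) {
    // Follow-up request: filter the cached full list by the typed characters
    // instead of recomputing, then return the next (up to) 1k items.
    const filtered = cachedItems.filter(i => i.label.startsWith(typedText));
    return {
      isIncomplete: filtered.length > MAX_ITEMS,
      items: filtered.slice(0, MAX_ITEMS),
    };
  }
  // Initial request (or a non-incomplete trigger): wipe the old cache,
  // compute the entire list once, and cache it for subsequent requests.
  cachedItems = computeAllItems();
  return {
    isIncomplete: cachedItems.length > MAX_ITEMS,
    items: cachedItems.slice(0, MAX_ITEMS),
  };
}
```

The key design point is that the expensive full-list computation happens once per completion session; follow-up requests only pay for filtering and a smaller serialization.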