Eliminate startup delay when slow-starting LLM completion provider is configured #14910

Merged — krassowski merged 3 commits into ipython:main on May 31, 2025
Conversation
Force-pushed from 3b57695 to c0a0d7c
Pull Request Overview
This PR reduces startup delays when configuring a slow-starting LLM completion provider by delaying its instantiation until the first request, and updates import paths to the new jupyter_ai_magics package.
- Add a test (test_slow_llm_provider_should_not_block_init) to verify that setting the provider class doesn't block startup.
- Introduce lazy initialization via an _init_llm_provider constructor reference, replacing eager instantiation.
- Update import paths in tests, examples, and core code to reference jupyter_ai_magics.
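The lazy-initialization change described above can be sketched as follows. This is a hedged illustration, not the PR's actual code: the class name AutoSuggester and the _llm_provider_instance attribute are invented for the example; only the _init_llm_provider name comes from the PR description.

```python
from typing import Callable, Optional


class AutoSuggester:
    """Hypothetical stand-in for the provider-holding class in auto_suggest.py."""

    _init_llm_provider: Optional[Callable]

    def __init__(self, init_llm_provider: Optional[Callable] = None):
        # Store the constructor reference, not the (potentially slow) instance.
        self._init_llm_provider = init_llm_provider
        self._llm_provider_instance = None

    @property
    def _llm_provider(self):
        # Build the provider only when the first completion request needs it,
        # so configuring a slow-starting provider does not delay startup.
        if self._llm_provider_instance is None and self._init_llm_provider is not None:
            self._llm_provider_instance = self._init_llm_provider()
        return self._llm_provider_instance
```

The key point is that `__init__` only stores a callable; the expensive constructor runs on first property access and the result is cached for subsequent requests.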
Reviewed Changes
Copilot reviewed 5 out of 5 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| tests/test_shortcuts.py | Added performance test for non-blocking provider configuration |
| tests/fake_llm.py | Added SlowStartingCompletionProvider and updated BaseProvider import |
| examples/auto_suggest_llm.py | Updated import paths to jupyter_ai_magics |
| IPython/terminal/shortcuts/auto_suggest.py | Replaced eager _llm_provider with lazy _init_llm_provider and instance property |
| IPython/terminal/interactiveshell.py | Deferred provider construction by assigning an init function |
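The interactiveshell.py change (assigning an init function rather than a constructed instance) can be illustrated with a self-contained sketch. FakeShell and SlowStartingProvider are invented stand-ins, not IPython's actual classes; the point is only to show why deferring construction keeps startup fast.

```python
import time
from types import SimpleNamespace


class SlowStartingProvider:
    """Stands in for an LLM provider with an expensive constructor."""

    def __init__(self):
        time.sleep(0.2)  # simulate slow startup (model load, network handshake, ...)


class FakeShell:
    """Illustrative stand-in for TerminalInteractiveShell; not IPython's API."""

    def __init__(self, llm_provider_class=None):
        self.auto_suggest = SimpleNamespace(_init_llm_provider=None)
        if llm_provider_class is not None:
            # Assign a zero-argument init function instead of calling the
            # constructor here, so configuring the provider is effectively free.
            def init_llm_provider():
                return llm_provider_class()

            self.auto_suggest._init_llm_provider = init_llm_provider


start = time.perf_counter()
shell = FakeShell(llm_provider_class=SlowStartingProvider)
startup = time.perf_counter() - start  # stays well under the provider's 0.2 s delay
```

The slow constructor only runs later, when the stored init function is invoked for the first completion request.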
Comments suppressed due to low confidence (4)
IPython/terminal/shortcuts/auto_suggest.py:359
- This import still references the old jupyter_ai path. Update it to the new jupyter_ai_magics.models.completion module for consistency.

  import jupyter_ai.completions.models as jai_models
IPython/terminal/interactiveshell.py:518
- Typo in comment: 'Provisinal' should be 'Provisional'.
# Provisinal in 8.32
IPython/terminal/shortcuts/auto_suggest.py:176
- The Callable type is used in the class annotation but not imported; add from typing import Callable to avoid a NameError.

  _init_llm_provider: Callable | None
IPython/terminal/interactiveshell.py:511
- The indentation of the def init_llm_provider() block is inconsistent and may cause an IndentationError. Align indentation levels to match the surrounding code.

  def init_llm_provider():
meeseeksmachine pushed a commit to meeseeksmachine/ipython that referenced this pull request on May 31, 2025: "… LLM completion provider is configured"
Results with ExampleCompletionProvider:

Before

Benchmark 1: ipython --TerminalInteractiveShell.llm_provider_class=examples.auto_suggest_llm.ExampleCompletionProvider -c exit
  Time (mean ± σ):      2.189 s ±  0.019 s    [User: 3.848 s, System: 0.286 s]
  Range (min … max):    2.171 s …  2.226 s    10 runs

After

Benchmark 1: ipython --TerminalInteractiveShell.llm_provider_class=examples.auto_suggest_llm.ExampleCompletionProvider -c exit
  Time (mean ± σ):     623.9 ms ±  14.7 ms    [User: 480.7 ms, System: 130.6 ms]
  Range (min … max):   600.0 ms … 650.6 ms    10 runs
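The output format above matches hyperfine's. Assuming hyperfine is installed and the example provider is importable from a checkout of the repository, a run like the following would reproduce the comparison:

```shell
# Hypothetical reproduction command; hyperfine and the examples/ directory
# from the ipython checkout are assumed to be available on this machine.
hyperfine --runs 10 \
  "ipython --TerminalInteractiveShell.llm_provider_class=examples.auto_suggest_llm.ExampleCompletionProvider -c exit"
```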