Use builtin OpenAI plugin. Adds tool support. #18
Conversation
I suspect the prior failures were due to the tests picking up my real auth'd model data (which would explain why they worked for me locally). I updated the three failing tests to inject mocked model data directly.
@jmdaly can you kick off the workflow again to see if my changes fixed the 3 test failures?
Done!
@jmdaly Ah, apparently my existing auth config was allowing those three tests to pass. I'm pretty confident I've fixed those test issues now (an additional monkey patch in the auth path). I've also added a Dockerfile and test script that make it easier to test in an uncontaminated environment. Could you kick off the workflow again?
@jmdaly You might be interested in this post where I leverage llm and this branch of llm-github-copilot to show how to build your own coding agent: https://kanaka.github.io/blog/llm-agent-in-five-steps/
That's an awesome blog post!! Thanks for sharing :) I'm still hoping to review this PR and get it merged as soon as I can!
The GitHub Copilot completions (`/chat/completions`) API is based on the OpenAI API regardless of whether the backend model is an OpenAI one or not. This means we can greatly simplify the GitHub Copilot plugin by using the builtin OpenAI plugin (`llm.default_plugins.openai_models`) for everything except the Copilot device-based authentication.

In addition to the significant simplification and code reduction, one key benefit of this approach is that we automatically get the additional functionality implemented by the openai_models plugin, such as tool calling, async, and schema/structured output. We should also get any future OpenAI plugin functionality automatically (or with minimal changes).

Tests have been updated to remove most of the completion tests, since those should now be covered by the tests in the OpenAI plugin itself. Minor updates were needed to the authentication and CLI tests to get them to work with the simplified code base.

Finally, the `fetch_models_data` top-level function (which wraps `_fetch_models_data`) has been memoized so that the model data is cached after the first call.
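For readers following along, the memoization could look roughly like this minimal sketch; the real signature and helper in `llm_github_copilot.py` may differ, and the placeholder body here is not the committed code:

```python
from functools import lru_cache

def _fetch_models_data():
    """Placeholder for the real helper that queries GitHub Copilot
    for available model data (actual implementation not shown)."""
    return {"data": []}

@lru_cache(maxsize=None)
def fetch_models_data():
    # Memoized wrapper: the underlying fetch runs at most once per
    # process, and every later call returns the cached result.
    return _fetch_models_data()
```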
Make sure the model data comes from the mocked data and not accidentally from an actual connection to GitHub Copilot. This updates these tests: test_prompt, test_model_variants, test_options.
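A hedged sketch of what such an injection might look like using pytest's `monkeypatch` fixture; `MOCK_MODELS_DATA` and the patched attribute are illustrative names, not the plugin's actual identifiers:

```python
import llm_github_copilot

# Hypothetical fixture data; the real tests define their own mocked payload.
MOCK_MODELS_DATA = {"data": [{"id": "gpt-4o"}]}

def test_prompt(monkeypatch):
    # Patch the model-data fetch so the test never talks to GitHub
    # Copilot, even when real auth config is present locally.
    monkeypatch.setattr(
        llm_github_copilot,
        "fetch_models_data",
        lambda *args, **kwargs: MOCK_MODELS_DATA,
    )
    ...  # exercise the prompt path against the mocked model list only
```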
Add a Dockerfile that packages up the code and tests so that they can be executed in an isolated environment. This avoids environmental contamination, such as auth config/keys causing certain code paths to succeed when they otherwise shouldn't.
Example usage:
docker build -t llm-github-copilot-test .
docker run -it llm-github-copilot-test
The _GithubCopilot class was inheriting from openai_models.Chat, but the derived classes are where the actual inheritance/mixin should happen. The extra inheritance seemed harmless for now but could cause problems in the future, since it could accidentally mix Chat and AsyncChat functionality.
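Roughly, the corrected hierarchy looks like the following sketch, based only on the description above; the class bodies and exact names in the plugin may differ:

```python
from llm.default_plugins import openai_models

class _GithubCopilot:
    """Copilot-specific pieces only (device auth, model data).
    Deliberately NOT a subclass of openai_models.Chat."""

class GithubCopilotChat(_GithubCopilot, openai_models.Chat):
    """Sync completions inherited from the builtin OpenAI Chat class."""

class GithubCopilotAsyncChat(_GithubCopilot, openai_models.AsyncChat):
    """Async completions; no accidental mixing of Chat into AsyncChat."""
```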
@jmdaly I rebased to address the merge conflicts from the other PR merge. I also just pushed a tiny cleanup to the class inheritance that I discovered. The resulting diff is a little bit hard to read; the commit messages above are the tl;dr version.
This looks good to me! @cavanaug you've done a fair amount of work in this repo - do these changes look good to you?
Also, removing almost 800 lines is awesome :)
Let me look into this in general and test it a bit, but I really do like the concept of reusing an existing library and automatically gaining benefits. Though for tool calling we will want to stay in alignment with whatever Simon does with llm in general; we don't want to expose any additional potential security implications with just this plugin.
@cavanaug Hopefully this should make it a lot easier to stay in sync with functionality in llm (including tools), because with this change the plugin now only does auth (llm-github-copilot/llm_github_copilot.py, lines 476 to 509 in d9e1202) and __init__.
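In llm plugin terms, that remaining surface might look something like the sketch below; `hookimpl` and `register_models` come from llm's plugin API, but whether the Copilot plugin registers its models exactly this way (reusing the class names from the earlier sketch) is an assumption:

```python
import llm

@llm.hookimpl
def register_models(register):
    # Assumed shape: walk the (cached) Copilot model list and register
    # a sync/async pair per model; completions, tools, schemas, etc.
    # are all inherited from the builtin OpenAI plugin classes.
    for model in fetch_models_data()["data"]:
        register(
            GithubCopilotChat(model["id"]),
            GithubCopilotAsyncChat(model["id"]),
        )
```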
I spent some time looking at this and testing it locally. LGTM. Great simplification!!
Yay! @jmdaly when you have a chance can you update the version and push a release? Thanks for your work on this plugin!

Yes definitely! I'll try to get it done this weekend :) Thanks again for your great contribution!
Hi @kanaka, just wanted to let you know that version 0.3 with your improvements has been released to PyPI :)