Conversation
Pull Request Overview
This PR adds support for configurable inference endpoints by introducing an endpoint input parameter. This allows users to specify custom model endpoints, particularly for organization-specific models, instead of being hardcoded to the default GitHub AI inference endpoint.
- Added an `endpoint` input parameter to the action configuration with a default value
- Updated the OpenAI client initialization to use the configurable endpoint
- Added documentation for the new input parameter in the README
Reviewed Changes
Copilot reviewed 3 out of 6 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| src/index.ts | Updates OpenAI client to use configurable endpoint input instead of hardcoded URL |
| action.yml | Adds new endpoint input with description and default value |
| README.md | Documents the new endpoint parameter in the configuration table |
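Based on the description above, the new `action.yml` input would likely look something like the following sketch (the description wording and `required` flag are assumptions, not taken from the diff):

```yaml
inputs:
  endpoint:
    # Description text is illustrative; the actual wording is in action.yml
    description: 'Base URL of the model inference endpoint'
    required: false
    default: 'https://models.github.ai/inference'
```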
```diff
 const openai = new OpenAI({
   apiKey: token,
-  baseURL: 'https://models.github.ai/inference'
+  baseURL: endpoint
 })
```
The `endpoint` input should be validated to ensure it is not empty before being used as `baseURL`. If the input is empty, the action should fall back to the default value or throw a meaningful error.
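One way to address this suggestion is a small helper that falls back to the previous hardcoded URL when the input is blank. This is a sketch, not the PR's actual code; the `resolveEndpoint` name is hypothetical, and in `src/index.ts` the raw value would come from the action's `endpoint` input (e.g. via `core.getInput('endpoint')`):

```typescript
// Mirror of the previously hardcoded inference URL.
const DEFAULT_ENDPOINT = 'https://models.github.ai/inference'

// Return the configured endpoint, falling back to the default when the
// input is unset or blank (GitHub Actions passes unset inputs as '').
function resolveEndpoint(raw: string | undefined): string {
  const trimmed = (raw ?? '').trim()
  return trimmed === '' ? DEFAULT_ENDPOINT : trimmed
}

// The client construction would then become:
//   const openai = new OpenAI({ apiKey: token, baseURL: resolveEndpoint(endpoint) })
```

Trimming before the emptiness check also guards against an input set to whitespace, which would otherwise produce an invalid `baseURL`.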
@sgoedecke or @jalafel, could you take a look at this?
LGTM, sorry about the delayed response.
Based on the same input in `actions/ai-inference`, this is needed to allow pointing at an org-specific models endpoint.