Add model parameters temperature and topP to action inputs #168
Merged
stephaniegiang merged 4 commits into actions:main on Feb 4, 2026
Conversation
Contributor
Pull request overview
Adds support for configuring LLM sampling parameters (temperature and top-p) via GitHub Action inputs, while keeping YAML prompt modelParameters as the higher-precedence source.
Changes:
- Added `temperature` and `top-p` inputs to `action.yml`.
- Read and parse the `temperature`/`top-p` inputs in `src/main.ts`, with the prompt YAML `modelParameters` taking precedence.
- Updated `README.md` and rebuilt `dist/index.js` to reflect the new inputs and behavior.
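A minimal workflow step using the new inputs might look like the sketch below. Only the `temperature` and `top-p` input names come from this PR; the action reference, `prompt` value, and step name are placeholders:

```yaml
- name: Run model inference
  uses: actions/ai-inference@main  # placeholder ref, not confirmed by this PR
  with:
    prompt: 'Summarize this issue'
    temperature: '0.2'
    top-p: '0.9'
```

If the prompt YAML sets `modelParameters.temperature` or `modelParameters.topP`, those values take precedence over the inputs above.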
Reviewed changes
Copilot reviewed 3 out of 5 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| src/main.ts | Parses new sampling inputs and forwards them into the inference request with YAML precedence. |
| dist/index.js | Compiled bundle updated to include the new input parsing and request wiring. |
| action.yml | Declares new action inputs temperature and top-p. |
| README.md | Documents the new inputs in the Inputs table. |
Comments suppressed due to low confidence (1)
src/main.ts:86
- New behavior (action inputs `temperature`/`top-p` and YAML precedence) isn't covered by existing `src/main.ts` tests. Add test cases to verify: (1) values are parsed and passed through when set via action inputs, (2) YAML `modelParameters.temperature`/`topP` override action inputs, and (3) invalid numeric inputs are handled as expected.
// Get temperature and topP (prompt YAML modelParameters takes precedence over action inputs)
const temperatureInput = core.getInput('temperature')
const topPInput = core.getInput('top-p')
const temperature =
promptConfig?.modelParameters?.temperature ?? (temperatureInput !== '' ? parseFloat(temperatureInput) : undefined)
const topP = promptConfig?.modelParameters?.topP ?? (topPInput !== '' ? parseFloat(topPInput) : undefined)
// Parse custom headers
const customHeadersInput = core.getInput('custom-headers')
const customHeaders = parseCustomHeaders(customHeadersInput)
// Build the inference request with pre-processed messages and response format
const inferenceRequest = buildInferenceRequest(
promptConfig,
systemPrompt,
prompt,
modelName,
temperature,
topP,
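The suggested tests could be sketched against a small pure helper that mirrors the precedence expression above. The helper name is hypothetical; the real code reads its values inline via `core.getInput` from `@actions/core`:

```typescript
// Hypothetical helper mirroring the precedence logic in src/main.ts,
// extracted so it can be unit-tested without mocking @actions/core.
function resolveSamplingParam(
  yamlValue: number | undefined,
  inputValue: string
): number | undefined {
  // Prompt YAML modelParameters win; otherwise parse the action input.
  return yamlValue ?? (inputValue !== '' ? parseFloat(inputValue) : undefined)
}

// (1) action input is parsed and used when YAML sets nothing
console.log(resolveSamplingParam(undefined, '0.7')) // 0.7
// (2) YAML modelParameters override the action input
console.log(resolveSamplingParam(0.2, '0.9'))       // 0.2
// (3) empty input leaves the parameter unset
console.log(resolveSamplingParam(undefined, ''))    // undefined
// Note: a non-numeric input currently yields NaN via parseFloat
console.log(Number.isNaN(resolveSamplingParam(undefined, 'abc')!)) // true
```

Case (3) shows why the code checks for the empty string first: `core.getInput` returns `''` for unset inputs, and `parseFloat('')` would otherwise produce `NaN`.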
stephaniegiang approved these changes on Feb 4, 2026
Description
This PR adds support for configuring the sampling parameters `temperature` and `top-p` for model inference.

Related Issues: #38