
Check for valid model at runtime (instead of hardcoding) #5

@3v0k4

Description


Currently, valid models are hardcoded in the adapter(s) and checked by exit.lua#set_model.

This means we have to release a new version of exit.nvim every time a provider (e.g., openai) releases a new model.

One way to fix this would be to validate only the adapter, allow any model string, and surface an error from the API response if an invalid model was used.

Example:

  • 0p3n4i:gpt-3.5-turbo should error and list the valid adapters (e.g., openai, ollama)
  • openai:gp1-3.5-1urb0 should be allowed, but the request would fail with something like "model 'openai:gp1-3.5-1urb0' is not valid, please select a valid model"; a list of valid models (or a URL to the docs where to find them) could be included in the message
  • openai:gpt-3.5-turbo would work as usual
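The behavior above could be sketched as follows. This is a minimal Python sketch of the proposed flow (exit.nvim itself is Lua); the function names, the adapter list, and the 404 status mapping are illustrative assumptions, not the plugin's actual API.

```python
# Assumed adapter list; in exit.nvim this would come from the adapters.
VALID_ADAPTERS = {"openai", "ollama"}

def set_model(spec: str) -> str:
    """Validate only the adapter part of 'adapter:model'; accept any model."""
    adapter, _, model = spec.partition(":")
    if adapter not in VALID_ADAPTERS:
        raise ValueError(
            f"adapter '{adapter}' is not valid, valid adapters: "
            + ", ".join(sorted(VALID_ADAPTERS))
        )
    return spec  # any model string is accepted at this point

def handle_api_response(spec: str, status: int) -> str:
    """Map a provider 'unknown model' error to a friendly message.

    Assumes the provider signals an unknown model with an error status
    (OpenAI, for instance, responds with 404 for unknown model ids).
    """
    if status == 404:
        return (f"model '{spec}' is not valid, please select a valid model "
                "(see the provider's model docs)")
    return "ok"
```

With this, `set_model("0p3n4i:gpt-3.5-turbo")` errors immediately, while `openai:gp1-3.5-1urb0` only fails once the request comes back.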

Another solution would be to contact an API endpoint when setting the model to check what models are available for that adapter.
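For that second approach, providers already expose model listings: OpenAI has GET /v1/models and Ollama has GET /api/tags. A Python sketch of the validation step, assuming an OpenAI-style response shape (the network call itself is omitted):

```python
import json

def model_ids(models_json: str) -> set[str]:
    """Extract model ids from an OpenAI-style GET /v1/models response body."""
    data = json.loads(models_json)
    return {item["id"] for item in data.get("data", [])}

def check_model(model: str, models_json: str) -> None:
    """Raise if the requested model is not in the provider's listing."""
    ids = model_ids(models_json)
    if model not in ids:
        raise ValueError(
            f"model '{model}' is not valid, available: "
            + ", ".join(sorted(ids))
        )

# Example response fragment in the documented OpenAI shape (ids are samples):
SAMPLE = '{"data": [{"id": "gpt-3.5-turbo"}, {"id": "gpt-4"}]}'
```

The trade-off is an extra HTTP round-trip (and a required API key) at set-model time, versus deferring the failure to the first completion request.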
