Conversation
I'm a bit busy, but I promise I'll look into it this week! 🙏
Forgot to add the config:

```lua
{
  '3v0k4/exit.nvim',
  config = function()
    require('exit').setup({ model = 'llama3.2:3b' })
    vim.api.nvim_create_user_command('ExitOllama', function(opts)
      local prompt = table.concat(opts.fargs, " ")
      require('exit').prompt(prompt)
    end, { nargs = "*" })
  end,
}
```
This looks awesome, thanks for taking the time! 🙏
At some point, we should dynamically check what models are available in Ollama, but we can leave it out of the scope of this PR and just hardcode them for now like you already did.
I reviewed at a high level because I think you were mostly trying to figure out the structure and left some cleanups for later.
Happy to do a second code review when you are ready.
I'd like to recommend a change that should make our lives simpler:
If we keep using the convention ADAPTER_NAME:MODEL_NAME, this file wouldn't require (most) changes.
For openai, you'd pass something like openai:gpt-3.5-turbo, for ollama something like ollama:llama3.2:3b.
If we replace utils.split with utils.split_once (so that ollama:llama3.2:3b is split into {"ollama", "llama3.2:3b"}), everything else remains the same in this file.
And that would be great as this file should not know anything about the specific adapters, just that they are available in adapters_by_name and that they implement some methods like prompt.
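For illustration, here is a minimal sketch of what utils.split_once could look like. The name comes from the suggestion above; the implementation itself is assumed, not taken from the actual PR:

```lua
-- Split a string on the FIRST occurrence of the separator only, so that
-- "ollama:llama3.2:3b" becomes { "ollama", "llama3.2:3b" }.
local function split_once(string_, separator)
  -- Plain-text find (no Lua patterns), starting from the beginning.
  local start_index, end_index = string.find(string_, separator, 1, true)
  if not start_index then
    return { string_ }
  end
  return {
    string.sub(string_, 1, start_index - 1),
    string.sub(string_, end_index + 1),
  }
end
```

With that in place, split_once("openai:gpt-3.5-turbo", ":") yields { "openai", "gpt-3.5-turbo" } and the dispatch code never needs to know how many colons a model name contains.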
lua/ollama.lua
Outdated
```lua
if not prompt then
  error("Prompt is nil; please provide a valid prompt.")
end
local data = vim.fn.json_encode({ model = model, prompt = Module.appendPrompt(prompt), stream = false })
```
Suggested change:

```diff
-local data = vim.fn.json_encode({ model = model, prompt = Module.appendPrompt(prompt), stream = false })
+local data = vim.fn.json_encode({ model = model, prompt = Module.append_prompt(prompt), stream = false })
```
lua/ollama.lua
Outdated
```lua
local clean_response = response.response:gsub("v:null$", "")

-- Use the cleaned response
require('ollama').insert_command(clean_response)
```
If we return the nvim command, we can remove insert_command from this file because exit.lua#prompt would take care of that (via vim.api.nvim_feedkeys).
Here's some (dirty) code to explain what I mean:
```diff
-local clean_response = response.response:gsub("v:null$", "")
--- Use the cleaned response
-require('ollama').insert_command(clean_response)
+local cmd = response.response:gsub("v:null$", "")
+cmd = tostring(cmd):gsub("^%s*(.-)%s*$", "%1")
+return cmd
```
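To sketch the other half of this suggestion: a pure helper can turn the returned command into the keys that exit.lua#prompt would feed to Neovim, keeping the vim.api call at the edge. The helper name to_ex_keys is hypothetical, not from the PR:

```lua
-- Hypothetical helper: trim the adapter's returned command and prefix it
-- with ":" so it can be fed to the command line via vim.api.nvim_feedkeys.
local function to_ex_keys(cmd)
  local trimmed = tostring(cmd):gsub("^%s*(.-)%s*$", "%1")
  return ":" .. trimmed
end

-- In exit.lua#prompt (inside Neovim), roughly:
--   local cmd = Module.options.adapter.prompt(Module.options.model_name, prompt)
--   vim.api.nvim_feedkeys(to_ex_keys(cmd), "n", false)
```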
lua/ollama.lua
Outdated
```lua
end
local data = vim.fn.json_encode({ model = model, prompt = Module.appendPrompt(prompt), stream = false })
local command = 'curl -s -X POST http://localhost:11434/api/generate -d \'' .. data .. '\''
print("Prompting..")
```
I think it would be good to add the model, so that the user can double check it's the right one.
Something like:
```lua
print("Prompting " .. Module.name .. ":" .. model .. "..")
```
lua/exit.lua
Outdated
```lua
local cmd = Module.options.adapter.prompt(Module.options.model_name, prompt)
local escaped = vim.fn.escape(cmd, '\\\"')
vim.api.nvim_feedkeys(escaped, "n", {})
if Module.options.adapter == 'ollama' then
```
We should remove these ifs at some point, but I think it wasn't working because it should be:
```diff
-if Module.options.adapter == 'ollama' then
+if Module.options.adapter.name == 'ollama' then
```
lua/exit.lua
Outdated
```lua
if Module.options.adapter == 'ollama' then
  return
end
if Module.options.adapter == 'openai' then
```
```diff
-if Module.options.adapter == 'openai' then
+if Module.options.adapter.name == 'openai' then
```
The changes you noted are on point.

Okay.
3v0k4
left a comment
Great work. I have a few more nitpicks but it looks good!
Please add the new models to the Valid keys for {model_id}: and Valid keys for {opts.model}: lists in both the README.md and exit.txt.
Thanks for your effort 🙏
lua/utils.lua
Outdated
```lua
-- First part before the separator
table.insert(array, string.sub(string_, 1, start_index - 1))
-- Second part after the separator
```
You did a great job with naming variables, so I think we can remove the comments:
```diff
--- First part before the separator
-table.insert(array, string.sub(string_, 1, start_index - 1))
--- Second part after the separator
+table.insert(array, string.sub(string_, 1, start_index - 1))
```
lua/utils.lua
Outdated
```lua
-- Second part after the separator
table.insert(array, string.sub(string_, end_index + 1))
else
-- No separator found, return the whole string as one item in the array
```
```diff
--- No separator found, return the whole string as one item in the array
```
```
Return only the command to be executed as a raw string, no string delimiters
wrapping it, no yapping, no markdown, no fenced code blocks, what you return
will be passed to vim directly.
```
This is anecdotal, based on a couple of tests, but the following will prevent the model from returning multiple lines:
```diff
-Return only the command to be executed as a raw string, no string delimiters
-wrapping it, no yapping, no markdown, no fenced code blocks, what you return
-will be passed to vim directly.
+Return only the command to be executed as a raw string, no string delimiters
+wrapping it, no yapping, no markdown, no fenced code blocks, one line, what
+you return will be passed to vim directly.
```
lua/ollama.lua
Outdated
```lua
Module.insert_command = function(cmd)
  -- Ensure cmd is a string and trim any whitespace
  cmd = tostring(cmd):gsub("^%s*(.-)%s*$", "%1")

  local escaped = vim.fn.escape(cmd, '\\\"')

  local no_colon = string.gsub(escaped, ":", "")

  vim.fn.input(":", no_colon)
end
```
This is not used anywhere:
```diff
-Module.insert_command = function(cmd)
-  -- Ensure cmd is a string and trim any whitespace
-  cmd = tostring(cmd):gsub("^%s*(.-)%s*$", "%1")
-  local escaped = vim.fn.escape(cmd, '\\\"')
-  local no_colon = string.gsub(escaped, ":", "")
-  vim.fn.input(":", no_colon)
-end
```
lua/ollama.lua
Outdated
```lua
Module.append_prompt = function(prompt)
  if not prompt then
    error("Prompt is nil; please provide a valid prompt.")
  end
  return system_prompt .. prompt
end
```
The if condition is redundant because it's already checked in prompt.
Maybe let's inline system_prompt .. prompt since this function doesn't do much?
lua/ollama.lua
Outdated
```lua
if not prompt then
  error("Prompt is nil; please provide a valid prompt.")
end
```
I liked that in a previous version you had this at the beginning of exit.lua#prompt as:

```lua
if not prompt or prompt == "" then
  error("Prompt is nil; please provide a valid prompt.")
end
```

It makes sense as it would cover an empty prompt for all the adapters.
lua/ollama.lua
Outdated
```lua
if model:match("openai") then
  if not response.choices or #response.choices == 0 then
    error("No choices received in the response.")
  end
  return response.choices[1].message.content -- OpenAI response handling
else
  if not response.response then
    error("No response received in the response.")
  end

  local cmd = utils.trim_whitespace(response.response)
  return cmd
end
```
If the adapter is openai, we won't ever be here. I think we can boil this down to:
```diff
-if model:match("openai") then
-  if not response.choices or #response.choices == 0 then
-    error("No choices received in the response.")
-  end
-  return response.choices[1].message.content -- OpenAI response handling
-else
-  if not response.response then
-    error("No response received in the response.")
-  end
-  local cmd = utils.trim_whitespace(response.response)
-  return cmd
-end
+if not response.response then
+  error("No response received in the response.")
+end
+return response.response
```
And utils.trim_whitespace could be performed in exit.lua#prompt as it's useful for all adapters.
BTW, in what cases is not response.response true?
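For reference, utils.trim_whitespace presumably follows the same gsub pattern quoted elsewhere in this thread; a minimal sketch (assumed implementation, not the actual code):

```lua
-- Strip leading and trailing whitespace from a string.
local function trim_whitespace(string_)
  local trimmed = string.gsub(string_, "^%s*(.-)%s*$", "%1")
  return trimmed
end
```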
I tweaked it, so now the returned command is executed as soon as it's returned. Is that how it was intended? Or should it just print the cmd out so that the user hits enter? Thanks a lot for your help in this!
After a quick glance (please double check), I don't see in the ollama docs any mention of errors. If that's the case, we don't need to check for errors in the adapter.
I see that you added it to the code. Great idea!
It should just fill the command and let the user decide.

Do you have a prompt that triggers that behavior? With what model?
3v0k4
left a comment
Just a small detail.
This looks great. Fantastic job!
Co-authored-by: Riccardo <riccardo.odone@gmail.com>
True, there's only one mention of errors (with embedding generation), something I don't understand but also something we don't use as far as I can tell.
Yes, using llama3.2:3b automatically accepted the answer, upon asking: "how to quit vim?"
Thanks for your guidance! 👍
This is definitely something we should look into. But it's good we have a way to deterministically reproduce it. I'll be off for the next couple of weeks, so I'm unable to help with this.
I'm back!
I tried to reproduce with the following, but I cannot.

I'm merging this, thanks for the contribution! 🙏 Please feel free to open a separate issue/PR if you find a way to reproduce the issue mentioned above or have a solution.
I got it as far as pasting the command into the :ex box in the meantime, but when pressing enter the :ex box just disappears and the command doesn't get executed.