
Add: ollama support #4

Merged
3v0k4 merged 4 commits into 3v0k4:main from waikoo:main
Nov 5, 2024

Conversation

@waikoo
Contributor

@waikoo waikoo commented Oct 7, 2024

I got it as far as pasting the command into the :ex box for now, but when pressing Enter the :ex box just disappears and the command doesn't get executed.

@3v0k4
Owner

3v0k4 commented Oct 7, 2024

I'm a bit busy, but I promise I'll look into it this week! 🙏

@waikoo
Contributor Author

waikoo commented Oct 8, 2024

Forgot to add the config:

  {
    '3v0k4/exit.nvim',
    config = function()
      require('exit').setup({ model = 'llama3.2:3b' })

      vim.api.nvim_create_user_command('ExitOllama', function(opts)
        local prompt = table.concat(opts.fargs, " ")
        require('exit').prompt(prompt)
      end, { nargs = "*" })
    end
  }

Owner

@3v0k4 3v0k4 left a comment

This looks awesome, thanks for taking the time! 🙏

At some point, we should dynamically check what models are available in Ollama, but we can leave it out of the scope of this PR and just hardcode them for now like you already did.

I reviewed at a high level because I think you were mostly trying to figure out the structure and left some cleanups for later.

Happy to do a second code review when you are ready.

Owner

@3v0k4 3v0k4 Oct 10, 2024

I'd like to recommend a change that should make our lives simpler:

If we keep using the convention ADAPTER_NAME:MODEL_NAME, this file wouldn't require (most) changes.

For openai, you'd pass something like openai:gpt-3.5-turbo, for ollama something like ollama:llama3.2:3b.

If we replace utils.split with utils.split_once (so that ollama:llama3.2:3b is split into {"ollama", "llama3.2:3b"}), everything else remains the same in this file.

And that would be great as this file should not know anything about the specific adapters, just that they are available in adapters_by_name and that they implement some methods like prompt.
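To make the convention concrete, here is a minimal sketch of what such a split_once helper could look like (hypothetical code; the real helper lives in lua/utils.lua and may differ):

```lua
-- Hypothetical sketch of utils.split_once: split only on the FIRST
-- occurrence of the separator, so "ollama:llama3.2:3b" becomes
-- { "ollama", "llama3.2:3b" } and the model name keeps its inner ":".
local function split_once(string_, separator)
  -- plain find (last argument true disables Lua patterns), starting at index 1
  local start_index, end_index = string.find(string_, separator, 1, true)
  if start_index then
    return {
      string.sub(string_, 1, start_index - 1),
      string.sub(string_, end_index + 1),
    }
  end
  -- no separator found: return the whole string as the only item
  return { string_ }
end
```

With this, openai:gpt-3.5-turbo yields { "openai", "gpt-3.5-turbo" } and ollama:llama3.2:3b yields { "ollama", "llama3.2:3b" }, so the dispatch file never needs to know about adapter-specific model naming.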

lua/ollama.lua Outdated
if not prompt then
error("Prompt is nil; please provide a valid prompt.")
end
local data = vim.fn.json_encode({ model = model, prompt = Module.appendPrompt(prompt), stream = false })
Owner

Suggested change
- local data = vim.fn.json_encode({ model = model, prompt = Module.appendPrompt(prompt), stream = false })
+ local data = vim.fn.json_encode({ model = model, prompt = Module.append_prompt(prompt), stream = false })

lua/ollama.lua Outdated
Comment on lines +72 to +75
local clean_response = response.response:gsub("v:null$", "")

-- Use the cleaned response
require('ollama').insert_command(clean_response)
Owner

If we return the nvim command, we can remove insert_command from this file because exit.lua#prompt would take care of that (via vim.api.nvim_feedkeys).

Here's some (dirty) code to explain what I mean:

Suggested change
- local clean_response = response.response:gsub("v:null$", "")
- -- Use the cleaned response
- require('ollama').insert_command(clean_response)
+ local cmd = response.response:gsub("v:null$", "")
+ cmd = tostring(cmd):gsub("^%s*(.-)%s*$", "%1")
+ return cmd
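Outside Neovim, the cleanup suggested above behaves like this (a standalone sketch with a hypothetical clean_command wrapper; in the PR the two gsub lines sit inline in the adapter, and exit.lua#prompt would then feed the returned command via vim.api.nvim_feedkeys):

```lua
-- Sketch of the suggested cleanup: strip a trailing "v:null" artifact,
-- trim surrounding whitespace, and return the command instead of
-- inserting it from inside the adapter.
local function clean_command(raw)
  local cmd = raw:gsub("v:null$", "")
  cmd = tostring(cmd):gsub("^%s*(.-)%s*$", "%1")
  return cmd
end
```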

lua/ollama.lua Outdated
end
local data = vim.fn.json_encode({ model = model, prompt = Module.appendPrompt(prompt), stream = false })
local command = 'curl -s -X POST http://localhost:11434/api/generate -d \'' .. data .. '\''
print("Prompting..")
Owner

I think it would be good to add the model, so that the user can double check it's the right one.

Something like:

print("Prompting " .. Module.name .. ":" .. model .. "..")

lua/exit.lua Outdated
local cmd = Module.options.adapter.prompt(Module.options.model_name, prompt)
local escaped = vim.fn.escape(cmd, '\\\"')
vim.api.nvim_feedkeys(escaped, "n", {})
if Module.options.adapter == 'ollama' then
Owner

We should remove these ifs at some point, but I think it wasn't working because it should be:

Suggested change
- if Module.options.adapter == 'ollama' then
+ if Module.options.adapter.name == 'ollama' then

lua/exit.lua Outdated
if Module.options.adapter == 'ollama' then
return
end
if Module.options.adapter == 'openai' then
Owner

Suggested change
- if Module.options.adapter == 'openai' then
+ if Module.options.adapter.name == 'openai' then

@waikoo
Contributor Author

waikoo commented Oct 10, 2024

The changes you noted are on point.
Not being much of a PR master, I pushed some new changes to my own remote repo, and I noticed they appear in the 'Commits' section of this conversation. Is this how it's supposed to be done?
PS: It works for me now locally. The only thing I did extra besides the points you mentioned was to extract the whitespace trimmer line into a utils method. I no longer found the v:null extractor to be necessary given the new setup.
Thanks for your time and being so thorough.

At some point, we should dynamically check what models are available in Ollama, but we can leave it out of the scope of this PR and just hardcode them for now like you already did.

Okay.

Owner

@3v0k4 3v0k4 left a comment

Great work. I have a few more nitpicks but it looks good!

Please add the new models to the Valid keys for {model_id}: and Valid keys for {opts.model}: lists in both the README.md and exit.txt.

Thanks for your effort 🙏

lua/utils.lua Outdated
Comment on lines +80 to +82
-- First part before the separator
table.insert(array, string.sub(string_, 1, start_index - 1))
-- Second part after the separator
Owner

@3v0k4 3v0k4 Oct 11, 2024

You did a great job with naming variables, so I think we can remove the comments:

Suggested change
- -- First part before the separator
- table.insert(array, string.sub(string_, 1, start_index - 1))
- -- Second part after the separator
+ table.insert(array, string.sub(string_, 1, start_index - 1))

lua/utils.lua Outdated
-- Second part after the separator
table.insert(array, string.sub(string_, end_index + 1))
else
-- No separator found, return the whole string as one item in the array
Owner

@3v0k4 3v0k4 Oct 11, 2024

Suggested change
- -- No separator found, return the whole string as one item in the array

Comment on lines +17 to +19
Return only the command to be executed as a raw string, no string delimiters
wrapping it, no yapping, no markdown, no fenced code blocks, what you return
will be passed to vim directly.
Owner

@3v0k4 3v0k4 Oct 11, 2024

This is anecdotal, based on a couple of tests, but the following will prevent the model from returning multiple lines:

Suggested change
- Return only the command to be executed as a raw string, no string delimiters
- wrapping it, no yapping, no markdown, no fenced code blocks, what you return
- will be passed to vim directly.
+ Return only the command to be executed as a raw string, no string delimiters
+ wrapping it, no yapping, no markdown, no fenced code blocks, one line, what
+ you return will be passed to vim directly.

lua/ollama.lua Outdated
Comment on lines +37 to +48
Module.insert_command = function(cmd)
-- Ensure cmd is a string and trim any whitespace
cmd = tostring(cmd):gsub("^%s*(.-)%s*$", "%1")

local escaped = vim.fn.escape(cmd, '\\\"')

local no_colon = string.gsub(escaped, ":", "")

vim.fn.input(":", no_colon)
end


Owner

This is not used anywhere:

Suggested change
- Module.insert_command = function(cmd)
-   -- Ensure cmd is a string and trim any whitespace
-   cmd = tostring(cmd):gsub("^%s*(.-)%s*$", "%1")
-   local escaped = vim.fn.escape(cmd, '\\\"')
-   local no_colon = string.gsub(escaped, ":", "")
-   vim.fn.input(":", no_colon)
- end

lua/ollama.lua Outdated
Comment on lines +30 to +35
Module.append_prompt = function(prompt)
if not prompt then
error("Prompt is nil; please provide a valid prompt.")
end
return system_prompt .. prompt
end
Owner

The if condition is redundant because it's already checked in prompt.

Maybe let's inline system_prompt .. prompt since this function doesn't do much?

lua/ollama.lua Outdated
Comment on lines +50 to +52
if not prompt then
error("Prompt is nil; please provide a valid prompt.")
end
Owner

I liked that in a previous version you had this at the beginning of exit.lua#prompt as:

  if not prompt or prompt == "" then
    error("Prompt is nil; please provide a valid prompt.")
  end

It makes sense as it would cover an empty prompt for all the adapters.

lua/ollama.lua Outdated
Comment on lines +60 to +72
if model:match("openai") then
if not response.choices or #response.choices == 0 then
error("No choices received in the response.")
end
return response.choices[1].message.content -- OpenAI response handling
else
if not response.response then
error("No response received in the response.")
end

local cmd = utils.trim_whitespace(response.response)
return cmd
end
Owner

If the adapter is openai, we won't ever be here. I think we can boil this down to:

Suggested change
- if model:match("openai") then
-   if not response.choices or #response.choices == 0 then
-     error("No choices received in the response.")
-   end
-   return response.choices[1].message.content -- OpenAI response handling
- else
-   if not response.response then
-     error("No response received in the response.")
-   end
-   local cmd = utils.trim_whitespace(response.response)
-   return cmd
- end
+ if not response.response then
+   error("No response received in the response.")
+ end
+ return response.response

And utils.trim_whitespace could be performed in exit.lua#prompt as it's useful for all adapters.
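For reference, a trim_whitespace along these lines would work the same for every adapter's response (a sketch; the actual helper is in lua/utils.lua):

```lua
-- Sketch of utils.trim_whitespace: strip leading and trailing
-- whitespace, mirroring the gsub pattern used elsewhere in the plugin.
local function trim_whitespace(s)
  -- extra parentheses drop gsub's second return value (the match count)
  return (tostring(s):gsub("^%s*(.-)%s*$", "%1"))
end
```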

BTW, in what cases is not response.response true?

@waikoo
Contributor Author

waikoo commented Oct 11, 2024

BTW, in what cases is not response.response true?

Are you referring to utils.system() already handling the case when an error is thrown? After it gets out of utils.system, it's guaranteed that it already has a response. If it doesn't have a response, an error is thrown already. So we don't really need the not response.response check in ollama.lua#prompt, right?

What do you think about adding "Is ollama serve running?" to the Failed to run curl error message? I think it might be helpful to someone reading it and realizing maybe there's something they can do about it.

I tweaked utils.split_once() a bit because it was only returning the part before the first : from the model_id. Now it returns both ollama and : + llama3.2:3b.

So now the returned command is executed as soon as it's returned. Is that how it was intended? Or should it just print the cmd out so that the user hits Enter?

Thanks a lot for your help in this!

@3v0k4
Owner

3v0k4 commented Oct 14, 2024

Are you referring to utils.system() already handling the case when an error is thrown? After it gets out of utils.system, it's guaranteed that it already has a response. If it doesn't have a response, an error is thrown already. So we don't really need the not response.response check in ollama.lua#prompt, right?

utils.lua#system just checks the exit code of the curl command. If the API returned an error, you should check inside the adapter (similarly to openai.lua's `if response.error then error(response.error.message) end`).

After a quick glance (please double check), I don't see in the ollama docs any mention of errors. If that's the case, we don't need to check for errors in the adapter.

What do you think about adding "Is ollama serve running?" to the Failed to run curl error message? I think it might be helpful to someone reading it and realizing maybe there's something they can do about it.

I see that you added it to the code. Great idea!

So now the returned command is executed as soon as it's returned. Is that how it was intended as? Or should it just print the cmd out so that the user hits ?

It should just fill the command and let the user decide. The model could return !rm -rf /, so it would be problematic if it executed automatically.

Do you have a prompt that triggers that behavior? With what model?

Owner

@3v0k4 3v0k4 left a comment

Just a small detail.

This looks great. Fantastic job!

Co-authored-by: Riccardo <riccardo.odone@gmail.com>
@waikoo
Contributor Author

waikoo commented Oct 18, 2024

After a quick glance (please double check), I don't see in the ollama docs any mention of errors. If that's the case, we don't need to check for errors in the adapter.

True, there's only one mention of errors, for embedding generation, which I don't fully understand, but as far as I can tell we don't use it.

Do you have a prompt that triggers that behavior? With what model?

Yes, using llama3.2:3b automatically accepted the answer, upon asking: "how to quit vim?"

This looks great. Fantastic job!

Thanks for your guidance! 👍

@3v0k4
Owner

3v0k4 commented Oct 18, 2024

Yes, using llama3.2:3b automatically accepted the answer, upon asking: "how to quit vim?"

This is definitely something we should look into.

But it's good we have a way to deterministically reproduce.

I'll be off for the next couple of weeks, so I'm unable to help with this.

@3v0k4
Owner

3v0k4 commented Nov 5, 2024

I'm back!

Yes, using llama3.2:3b automatically accepted the answer, upon asking: "how to quit vim?"

I tried to reproduce with the following but I cannot (i.e., :q is filled and I need to press Enter to execute it):

:ExitModel ollama:llama3.2:3b
:Exit how to quit vim?

I'm merging this, thanks for the contribution! 🙏

Please feel free to open a separate issue/pr if you find a way to reproduce the issue mentioned above or have a solution.

@3v0k4 3v0k4 merged commit 6471f84 into 3v0k4:main Nov 5, 2024