golloom

package module
v0.0.0-...-8db06e2
Published: Apr 3, 2025 License: Apache-2.0 Imports: 10 Imported by: 0

README

Golloom

Golloom is a Go library for working with language models served by Ollama. It gives developers a comprehensive set of tools to manage models, generate prompts, and handle responses efficiently within Go applications.

  • Model Management: Easily pull and push models to and from the Ollama server, ensuring synchronization and version control.
  • Prompt Generation: Craft and send prompts to Ollama language models, receiving structured responses tailored to your application's needs.
  • Process Monitoring: Monitor the status of model processes, allowing for effective resource management and debugging.
  • Version Retrieval: Fetch version information of the application to maintain compatibility and track updates from Ollama.

Installation

To incorporate Golloom into your Go project, use go get:

go get github.com/nthnn/golloom

Basic Usage

package main

import (
	"bufio"   // Package bufio implements buffered I/O.
	"context" // Package context defines the Context type, which carries deadlines, cancellation signals, and other request-scoped values.
	"fmt"     // Package fmt implements formatted I/O with functions analogous to C's printf and scanf.
	"log"     // Package log implements simple logging.
	"os"      // Package os provides a platform-independent interface to operating system functionality.
	"strings" // Package strings implements simple functions to manipulate UTF-8 encoded strings.

	"github.com/nthnn/golloom" // Importing the Golloom package for interacting with language models.
)

func main() {
	// Initialize a new Golloom client to interact with the Ollama language model server.
	// The client connects to the server at "http://localhost:11434" with a timeout of 12 minutes.
	client, err := golloom.NewClient("http://localhost:11434", 12)
	if err != nil {
		log.Fatalf("Error creating client: %v", err) // Log and exit if client creation fails.
	}

	var history []golloom.Message // Slice to store the history of messages exchanged in the chat.
	ctx := context.Background()   // Create a background context for the chat operations.

	reader := bufio.NewReader(os.Stdin) // Create a buffered reader to read input from the standard input (console).

	for {
		fmt.Print(">> ") // Display the prompt for user input.

		// Read the user's input until a newline character is encountered.
		input, err := reader.ReadString('\n')
		if err != nil {
			log.Fatalf("Error reading input: %v", err) // Log and exit if reading input fails.
		}
		input = strings.TrimSpace(input) // Trim any leading or trailing whitespace from the input.

		// Check if the user wants to exit the chat.
		if input == "exit" {
			break // Exit the loop and end the program.
		}

		// Append the user's message to the chat history.
		history = append(history, golloom.Message{
			Role:    "user", // Role of the message sender.
			Content: input,  // Content of the user's message.
		})

		// Create a new chat request with the current history of messages.
		chatReq := &golloom.Chat{
			Model:    "deepseek-r1:14b", // Specify the model to use for generating responses.
			Messages: history,           // Include the chat history in the request.
		}

		// Send the chat request to the server and receive a response.
		chatResp, err := client.Chat(ctx, chatReq)
		if err != nil {
			log.Fatalf("Chat error: %v", err) // Log and exit if the chat request fails.
		}

		// Extract the assistant's message from the chat response.
		assistantMessage := chatResp.Message.Content
		fmt.Println(assistantMessage) // Display the assistant's response to the user.

		// Append the assistant's message to the chat history.
		history = append(history, golloom.Message{
			Role:    "assistant",      // Role of the message sender.
			Content: assistantMessage, // Content of the assistant's message.
		})
	}
}

License

Copyright 2025 Nathanne Isip

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Chat

type Chat struct {
	// Model indicates which language model should process the chat request.
	Model string `json:"model"`
	// Messages is a slice of Message structs representing the conversation history.
	Messages []Message `json:"messages"`

	// Format is an optional parameter that defines the desired format of the response.
	// It can be specified as a string or a more complex structure.
	Format interface{} `json:"format,omitempty"`
	// Options provides additional optional settings as key-value pairs for the chat session.
	Options map[string]interface{} `json:"options,omitempty"`
	// Tools lists optional tools (as maps) that may be utilized during the conversation.
	Tools []map[string]interface{} `json:"tools,omitempty"`

	// Stream is an optional flag that, when set to true, requests the server to stream the response.
	Stream *bool `json:"stream,omitempty"`
	// KeepAlive optionally controls how long the model stays loaded in memory
	// after the request (e.g., "5m"), following Ollama's keep_alive parameter.
	KeepAlive string `json:"keep_alive,omitempty"`
}

Chat encapsulates the data required to initiate a conversation with the language model. It includes the target model, a history of exchanged messages, and various optional configuration settings.

type Client

type Client struct {
	// BaseURL is the base URL of the server to which requests will be sent.
	BaseURL *url.URL
	// HTTPClient is the HTTP client used to make requests. Its configuration (e.g., timeout)
	// can be set during client initialization.
	HTTPClient *http.Client
}

Client is a struct that encapsulates the configuration and HTTP client used to interact with the Ollama server (or similar API endpoints). It holds the base URL for the API and an HTTP client instance with a configurable timeout.

func NewClient

func NewClient(
	baseURL string,
	minutes time.Duration,
) (*Client, error)

NewClient creates a new instance of Client configured to communicate with the Ollama server. It takes a base URL as a string and a duration (in minutes) for setting the HTTP client's timeout.

Parameters:

  • baseURL: A string representing the server's base URL.
  • minutes: A time.Duration value (in minutes) that sets the timeout for HTTP requests.

Returns:

  • A pointer to a Client instance properly configured with the base URL and HTTP client.
  • An error if the provided baseURL cannot be parsed.

func (*Client) Chat

func (c *Client) Chat(
	ctx context.Context,
	req *Chat,
) (*ModelResponse, error)

Chat sends a chat request to the Ollama server and retrieves the model's response. It constructs the full API endpoint URL and delegates the HTTP POST request to the sendChatRequest helper function.

Parameters:

  • ctx: A context for controlling cancellation and timeouts for the request.
  • req: A pointer to a Chat struct containing the conversation history and optional configuration.

Returns:

  • A pointer to a ModelResponse struct that encapsulates the response and related metadata.
  • An error if the HTTP request or response decoding fails.

func (*Client) CheckBlobExists

func (c *Client) CheckBlobExists(
	ctx context.Context,
	digest string,
) (bool, error)

CheckBlobExists checks if a blob with the given digest exists on the server. It sends a HEAD request to the /api/blobs/{digest} endpoint.

Parameters:

  • ctx: A context to control request lifetime (e.g., cancellation).
  • digest: A string representing the blob's digest or identifier.

Returns:

  • A boolean value: true if the blob exists, false otherwise.
  • An error if the request fails or if the digest is invalid.

func (*Client) CopyModel

func (c *Client) CopyModel(
	ctx context.Context,
	source, destination string,
) (*CopyModelResult, error)

CopyModel initiates the copying of a model from a source to a destination within the server. It constructs the appropriate API endpoint and sends a POST request with the source and destination parameters.

Parameters:

  • ctx: A context.Context object for managing request deadlines and cancellations.
  • source: The name or identifier of the source model to be copied.
  • destination: The target name or identifier where the model should be copied to.

Returns:

  • A pointer to a CopyModelResult struct containing status messages from the operation.
  • An error if the request fails or the server returns an unexpected response.

func (*Client) CreateModel

func (c *Client) CreateModel(
	ctx context.Context,
	req *CreateModelRequest,
) (*CreateModelResult, error)

CreateModel sends a request to create a new model on the server using the provided configuration. It constructs the appropriate API endpoint and sends a POST request with the creation parameters.

Parameters:

  • ctx: A context.Context object for managing request deadlines and cancellations.
  • req: A pointer to a CreateModelRequest struct containing the model creation parameters.

Returns:

  • A pointer to a CreateModelResult struct containing status messages from the operation.
  • An error if the request fails or the server returns an unexpected response.

func (*Client) DeleteModel

func (c *Client) DeleteModel(
	ctx context.Context,
	req *DeleteModelRequest,
) (*DeleteModelResult, error)

DeleteModel sends a request to delete a model from the server. It constructs the API endpoint URL for deletion and issues a POST request with the provided DeleteModelRequest.

Parameters:

  • ctx: A context.Context for controlling cancellation and timeouts during the HTTP request.
  • req: A pointer to a DeleteModelRequest containing the model identifier to be deleted.

Returns:

  • A pointer to a DeleteModelResult containing the server's status messages about the deletion.
  • An error if the request fails or the server returns an error.

func (*Client) Embed

func (c *Client) Embed(
	ctx context.Context,
	model, input string,
	options map[string]interface{},
) (*EmbedResult, error)

Embed sends a request to generate an embedding for the given input using the specified model and options. It constructs the appropriate API endpoint and sends a POST request with the necessary parameters.

Parameters:

  • ctx: A context.Context object for managing request deadlines and cancellations.
  • model: The name or identifier of the model to use for generating the embedding.
  • input: The input data for which the embedding is to be generated.
  • options: A map of additional options to customize the embedding process.

Returns:

  • A pointer to an EmbedResult struct containing the embedding and related metadata.
  • An error if the request fails or the server returns an unexpected response.

func (*Client) FetchModelInfo

func (c *Client) FetchModelInfo(
	ctx context.Context,
	model string,
	verbose bool,
) (*ModelInfoResult, error)

FetchModelInfo retrieves information about a specific model from the server. It sends a POST request to the "/api/show" endpoint with the model name and verbosity flag. The server's response is decoded into a ModelInfoResult struct.

Parameters:

  • ctx: The context.Context for managing request deadlines and cancellations.
  • model: The name or identifier of the model whose information is being requested.
  • verbose: A boolean flag indicating whether to request detailed information (true) or basic information (false).

Returns:

  • A pointer to a ModelInfoResult containing the model's information.
  • An error if the request fails or the response cannot be processed.

func (*Client) Generate

func (c *Client) Generate(
	ctx context.Context,
	req *PromptInfo,
) (*PromptResult, error)

Generate validates the provided PromptInfo, sends a prompt generation request to the server, and returns the resulting PromptResult.

func (*Client) ListModels

func (c *Client) ListModels(ctx context.Context) (*ModelList, error)

ListModels retrieves a list of available machine learning models from the server. It constructs the appropriate API endpoint, sends a GET request, and decodes the JSON response into a ModelList.

Parameters:

  • ctx: The context.Context for managing request deadlines and cancellations.

Returns:

  • A pointer to a ModelList containing metadata about available models.
  • An error if the request fails or the response cannot be processed.

func (*Client) ProcessStatus

func (c *Client) ProcessStatus(
	ctx context.Context,
) (*ModelProcessStatus, error)

ProcessStatus retrieves the current processing status of models from the server. It constructs the appropriate API endpoint, sends a GET request, and decodes the JSON response into a ModelProcessStatus.

Parameters:

  • ctx: The context.Context for managing request deadlines and cancellations.

Returns:

  • A pointer to a ModelProcessStatus containing the list of models being processed.
  • An error if the request fails or the response cannot be processed.

func (*Client) PullModel

func (c *Client) PullModel(
	ctx context.Context,
	model string,
) (*PullModelResult, error)

PullModel initiates a request to pull a specified model from the server. It constructs the appropriate URL, sends the request, and returns the status messages received in response.

func (*Client) PushBlob

func (c *Client) PushBlob(
	ctx context.Context,
	digest string,
	file io.Reader,
) error

PushBlob uploads a blob to the server. It sends a POST request with the blob data to the /api/blobs/{digest} endpoint.

Parameters:

  • ctx: A context to control request lifetime (e.g., cancellation).
  • digest: A string representing the blob's digest or identifier.
  • file: An io.Reader that provides the blob's data.

Returns:

  • An error if the upload fails.

func (*Client) PushModel

func (c *Client) PushModel(
	ctx context.Context,
	model string,
) (*PushModelResult, error)

PushModel initiates a request to push a specified model to the server. It constructs the appropriate URL, sends the request, and returns the status messages received in response.

func (*Client) Version

func (c *Client) Version(ctx context.Context) (*Version, error)

Version retrieves the current version information of the application by sending a GET request to the /api/version endpoint. It returns a Version struct containing the version details.

type CopyModelResult

type CopyModelResult struct {
	// StatusMessages contains a list of messages detailing the status or
	// progress of the copy operation.
	StatusMessages []string `json:"status_messages"`
}

CopyModelResult encapsulates the outcome of a model copying operation, including any status messages returned during the process.

type CreateModelRequest

type CreateModelRequest struct {
	Model      string                 `json:"model"`                // The name or identifier for the new model.
	From       string                 `json:"from,omitempty"`       // Optional. The source model to base the new model on.
	Files      map[string]string      `json:"files,omitempty"`      // Optional. A map of file names to their content, used in model creation.
	Adapters   map[string]string      `json:"adapters,omitempty"`   // Optional. A map specifying adapter configurations.
	Template   string                 `json:"template,omitempty"`   // Optional. A template defining the model's structure or behavior.
	License    interface{}            `json:"license,omitempty"`    // Optional. License information for the model.
	System     string                 `json:"system,omitempty"`     // Optional. System-specific parameters or configurations.
	Parameters map[string]interface{} `json:"parameters,omitempty"` // Optional. Additional parameters for model creation.
	Messages   []Message              `json:"messages,omitempty"`   // Optional. A sequence of messages or instructions related to the model.
	Stream     *bool                  `json:"stream,omitempty"`     // Optional. Indicates if the creation process should be streamed.
	Quantize   string                 `json:"quantize,omitempty"`   // Optional. Specifies quantization settings for the model.
}

CreateModelRequest represents the payload for creating a new model on the server. It includes various configuration options and parameters for the model creation process.

type CreateModelResult

type CreateModelResult struct {
	// A list of messages detailing the status or progress of the creation operation.
	StatusMessages []string `json:"status_messages"`
}

CreateModelResult encapsulates the outcome of a model creation operation, including any status messages returned during the process.

type DeleteModelRequest

type DeleteModelRequest struct {
	// Model specifies the unique identifier or name of the model to be deleted.
	Model string `json:"model"`
}

DeleteModelRequest represents the request payload for deleting a model from the server. It contains the identifier of the model that should be removed.

type DeleteModelResult

type DeleteModelResult struct {
	// StatusMessages holds any messages returned by the server regarding the deletion operation.
	StatusMessages []string `json:"status_messages"`
}

DeleteModelResult encapsulates the response returned by the server after attempting to delete a model. It includes a slice of status messages that describe the outcome of the operation.

type EmbedResult

type EmbedResult struct {
	Model     string      `json:"model"`      // The identifier of the model used to generate the embedding.
	CreatedAt time.Time   `json:"created_at"` // The timestamp indicating when the embedding was created.
	Embedding interface{} `json:"embedding"`  // The actual embedding data; its structure depends on the model's output.
}

EmbedResult represents the response from an embedding operation. It includes details about the model used, the creation timestamp, and the resulting embedding data.

type Message

type Message struct {
	// Role specifies the origin of the message, for example, "user" or "assistant".
	Role string `json:"role"`
	// Content holds the main textual content of the message.
	Content string `json:"content"`

	// Images contains an optional list of image URLs associated with the message.
	Images []string `json:"images,omitempty"`
	// ToolCalls holds an optional list of tool call details, where each tool call is represented as a map.
	ToolCalls []map[string]interface{} `json:"tool_calls,omitempty"`
}

Message represents a single message in a conversation, either sent by the user or generated by the assistant. It encapsulates the sender's role, the textual content of the message, and optional metadata such as images and tool call details.

type ModelDetails

type ModelDetails struct {
	Format            string   `json:"format"`             // The storage format of the model (e.g., ONNX, TensorFlow, PyTorch).
	Family            string   `json:"family"`             // The primary category or group the model belongs to (e.g., Vision, NLP).
	Families          []string `json:"families,omitempty"` // Additional categories or groups the model is associated with.
	ParameterSize     string   `json:"parameter_size"`     // The total number of parameters in the model, often indicative of its complexity.
	QuantizationLevel string   `json:"quantization_level"` // The degree of quantization applied to the model, affecting its size and performance.
}

ModelDetails encapsulates detailed attributes of a machine learning model, providing insights into its format, family, parameter size, and quantization level.

type ModelInfo

type ModelInfo struct {
	Name       string       `json:"name"`        // The unique identifier or name of the model.
	ModifiedAt time.Time    `json:"modified_at"` // The timestamp indicating the last modification time of the model.
	Size       int64        `json:"size"`        // The size of the model file in bytes.
	Digest     string       `json:"digest"`      // The checksum or hash digest of the model file for integrity verification.
	Details    ModelDetails `json:"details"`     // An embedded struct containing detailed attributes of the model.
}

ModelInfo provides metadata about a specific machine learning model, including its name, last modification timestamp, file size, checksum digest, and detailed attributes.

type ModelInfoResult

type ModelInfoResult struct {
	Modelfile  string                 `json:"modelfile"`  // The content or path of the model file associated with the model.
	Parameters string                 `json:"parameters"` // A string representation of the model's parameters.
	Template   string                 `json:"template"`   // The template used for the model, possibly indicating its architecture or purpose.
	Details    map[string]interface{} `json:"details"`    // A map containing additional detailed information about the model.
	ModelInfo  map[string]interface{} `json:"model_info"` // A map containing metadata or attributes specific to the model.
}

ModelInfoResult represents the structure of the response returned by the FetchModelInfo method. It includes various details about a model, such as its configuration, parameters, and additional metadata.

type ModelList

type ModelList struct {
	Models []ModelInfo `json:"models"` // A slice containing information about each available model.
}

ModelList represents a collection of machine learning models, serving as a container for multiple ModelInfo entries.

type ModelProcessStatus

type ModelProcessStatus struct {
	Models []string `json:"models"` // A slice of strings, each representing a model name.
}

ModelProcessStatus represents the JSON structure returned by the server, containing a list of model names currently being processed.

type ModelResponse

type ModelResponse struct {
	// Model specifies which language model generated the response.
	Model string `json:"model"`
	// CreatedAt records the timestamp when the response was created.
	CreatedAt time.Time `json:"created_at"`
	// Message contains the actual message content generated by the model.
	Message Message `json:"message"`
	// Done indicates whether the model has completed generating the response.
	Done bool `json:"done"`
	// DoneReason provides an optional explanation of why the response generation has stopped.
	DoneReason string `json:"done_reason,omitempty"`
	// TotalDuration measures the total time taken to generate the response
	// (Ollama reports these duration fields in nanoseconds).
	TotalDuration int64 `json:"total_duration,omitempty"`
	// LoadDuration represents the time taken to load the model or necessary data for generation.
	LoadDuration int64 `json:"load_duration,omitempty"`
	// PromptEvalCount counts the number of prompt evaluations performed during response generation.
	PromptEvalCount int `json:"prompt_eval_count,omitempty"`
	// PromptEvalDuration captures the total time spent evaluating the prompt.
	PromptEvalDuration int64 `json:"prompt_eval_duration,omitempty"`
	// EvalCount indicates the number of evaluations executed during the generation process.
	EvalCount int `json:"eval_count,omitempty"`
	// EvalDuration records the time spent in the evaluation phase of generating the response.
	EvalDuration int64 `json:"eval_duration,omitempty"`
}

ModelResponse represents the response returned by the language model after processing a chat request. It includes metadata about the response as well as the message generated by the model.

type PromptInfo

type PromptInfo struct {
	Model    string `json:"model"`              // The model to be used for prompt generation.
	Prompt   string `json:"prompt,omitempty"`   // The prompt text; optional field.
	Suffix   string `json:"suffix,omitempty"`   // Suffix to append to the prompt; optional field.
	System   string `json:"system,omitempty"`   // System context or instructions; optional field.
	Template string `json:"template,omitempty"` // Template to structure the prompt; optional field.

	Images  []string               `json:"images,omitempty"`  // List of images associated with the prompt; optional field.
	Format  interface{}            `json:"format,omitempty"`  // Desired format of the response (a string such as "json" or a schema map); optional field.
	Options map[string]interface{} `json:"options,omitempty"` // Additional options for prompt customization; optional field.

	Stream *bool `json:"stream,omitempty"` // Flag to indicate streaming response; optional field.
	Raw    *bool `json:"raw,omitempty"`    // Flag to indicate raw response; optional field.

	KeepAlive string      `json:"keep_alive,omitempty"` // How long the model stays loaded after the request (e.g., "5m"); optional field.
	Context   interface{} `json:"context,omitempty"`    // Contextual information for the prompt; optional field.
}

PromptInfo represents the structure of a prompt request, including various parameters to customize the prompt generation.

func (*PromptInfo) ValidatePromptInfo

func (req *PromptInfo) ValidatePromptInfo() error

ValidatePromptInfo validates the fields of the PromptInfo struct, ensuring that fields like Format and Context have appropriate types.

type PromptResult

type PromptResult struct {
	Model      string      `json:"model"`                 // The model used for prompt generation.
	Response   string      `json:"response"`              // The generated response.
	CreatedAt  time.Time   `json:"created_at"`            // Timestamp of when the prompt was created.
	Context    interface{} `json:"context,omitempty"`     // Contextual information associated with the prompt; optional field.
	Done       bool        `json:"done"`                  // Flag indicating if the prompt processing is complete.
	DoneReason string      `json:"done_reason,omitempty"` // Reason for completion; optional field.

	TotalDuration      int64 `json:"total_duration,omitempty"`       // Total time taken for prompt processing; optional field.
	LoadDuration       int64 `json:"load_duration,omitempty"`        // Time taken to load resources; optional field.
	PromptEvalCount    int   `json:"prompt_eval_count,omitempty"`    // Number of prompt evaluations; optional field.
	PromptEvalDuration int64 `json:"prompt_eval_duration,omitempty"` // Duration of prompt evaluations; optional field.
	EvalCount          int   `json:"eval_count,omitempty"`           // Number of evaluations performed; optional field.
	EvalDuration       int64 `json:"eval_duration,omitempty"`        // Duration of evaluations; optional field.
}

PromptResult represents the structure of a prompt response, containing details about the generated prompt and its processing metrics.

type PullModelResult

type PullModelResult struct {
	StatusMessages []string `json:"status_messages"` // List of status messages returned from the model pull operation.
}

PullModelResult represents the structure of the response received after initiating a model pull request, containing status messages.

type PushModelResult

type PushModelResult struct {
	StatusMessages []string `json:"status_messages"` // List of status messages returned from the model push operation.
}

PushModelResult represents the structure of the response received after initiating a model push request, containing status messages.

type Version

type Version struct {
	Version   string `json:"version"`              // The version string of the application.
	BuildTime string `json:"build_time,omitempty"` // The build time of the application, optional field.
}

Version represents the structure of the version information returned by the API, including the version string and optional build time.
