
OpenAIChatCompletionService as both ITextCompletion and IChatCompletion throws error when executing Semantic functions #1050

@Kevdome3000

Description

Describe the bug

Registering OpenAIChatCompletionService as both ITextCompletion and IChatCompletion does not work; executing a semantic function fails with the following error:

Something went wrong while rendering the semantic function or while executing the text completion. Function: Prompts.SystemResponse Error: Invalid request: The request is not valid, HTTP status: 404. Details: This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?
      Status: 404 (Not Found)
      
      Content:
      {
        "error": {
          "message": "This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?",
          "type": "invalid_request_error",
          "param": "model",
          "code": null
        }
      }

I also have to register a separate ITextCompletion service for the kernel to execute semantic functions at all, but text completion with the text-davinci-003 model does not work reliably, likely because the model is not capable enough. For my use case, it returns invalid JSON most of the time.
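
For reference, the setup I expect to work is a single registration that serves the chat model under both interfaces. A minimal sketch of that intent, assuming a flag along the lines of `alsoAsTextCompletion` on the registration method (hypothetical here; the exact parameter name and signature may differ between SK versions):

```csharp
var builder = Kernel.Builder
    .Configure(config =>
    {
        // One registration, served as both IChatCompletion and ITextCompletion.
        // 'alsoAsTextCompletion: true' is the assumption -- if a flag like this
        // worked, no separate text-davinci-003 registration would be needed.
        config.AddOpenAIChatCompletionService(
            "gpt-4",
            Env.Var("OPENAI_API_KEY"),
            alsoAsTextCompletion: true);
    });

var kernel = builder.Build();
```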

To Reproduce

skprompt.txt:

Example input:
You are an AI assistant that helps people find information.

Example output:
{
    "name": "Helpful AI",
    "role": "I help people find information.",
    "thoughts": [ "I need to find information for people.", "What information do people need?", "How can I find information for people?" ]
}

SystemMessage: {{$systemPrompt}}
 
Respond only with the output in the exact format specified in the Example output, with no explanation or conversation.
Ensure your response is fully valid JSON.

test method:

public static async Task RunAsync()
{
    var builder = Kernel.Builder
        .WithLogger(ConsoleLogger.Log)
        .Configure(c => c.SetDefaultHttpRetryConfig(new HttpRetryConfig
        {
            MaxRetryCount = 3,
            UseExponentialBackoff = true
        }))
        .Configure(config =>
        {
            config.AddOpenAIChatCompletionService("gpt-4", Env.Var("OPENAI_API_KEY"));
        });

    var kernel = builder.Build();

    kernel.ImportSemanticSkillFromDirectory(RepoFiles.SkillsPath(), "Prompts");

    var context = kernel.CreateNewContext();
    context.Variables.Set("systemPrompt", prompt); // 'prompt' is a class-level field (not shown)
    var systemPrompt = kernel.Func("Prompts", "SystemMessage");

    var result = await kernel.RunAsync(context.Variables, systemPrompt);

    Console.WriteLine(result.Result);

    if (result.ErrorOccurred)
    {
        return;
    }

    var systemResponse = JsonSerializer.Deserialize<SystemResponse>(result.Result);

    var systemName = systemResponse?.Name;
    var systemRole = systemResponse?.Role;
    List<string>? systemThoughts = systemResponse?.Thoughts;

    var sb = new StringBuilder();
    sb.AppendLine($"Name: {systemName}");
    sb.AppendLine($"Role: {systemRole}");

    for (var i = 0; i < systemThoughts?.Count; i++)
    {
        sb.AppendLine($"Thought {i + 1}: {systemThoughts[i]}");
    }

    Console.WriteLine(sb.ToString());
}
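
For completeness, the `SystemResponse` type passed to `JsonSerializer.Deserialize` is not shown anywhere in this report. Assuming it simply mirrors the lowercase keys in the example output, a sketch might be:

```csharp
using System.Collections.Generic;
using System.Text.Json.Serialization;

// Hypothetical DTO matching the lowercase keys of the example JSON output.
public class SystemResponse
{
    [JsonPropertyName("name")]
    public string? Name { get; set; }

    [JsonPropertyName("role")]
    public string? Role { get; set; }

    [JsonPropertyName("thoughts")]
    public List<string>? Thoughts { get; set; }
}
```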

Expected behavior

I should be able to register OpenAIChatCompletionService as both ITextCompletion and IChatCompletion and run semantic functions via the kernel.

Screenshots

Entire sample code with comments:

PromptTemplate: identical to the skprompt.txt shown above under "To Reproduce".

Sample class:

public static class Example01_GenerateSystemResponse
{
    public static async Task RunAsync()
    {
        var builder = Kernel.Builder
            .WithLogger(ConsoleLogger.Log)
            .Configure(c => c.SetDefaultHttpRetryConfig(new HttpRetryConfig
            {
                MaxRetryCount = 3,
                UseExponentialBackoff = true
            }))
            .Configure(config =>
            {
                // I shouldn't have to add a separate text completion service for the
                // chat service to work, but if I don't, I get an error.
                config.AddOpenAITextCompletionService("text-davinci-003", Env.Var("OPENAI_API_KEY"));
                config.AddOpenAIChatCompletionService("gpt-4", Env.Var("OPENAI_API_KEY"), "", false);
            });

        var kernel = builder.Build();

        await RunWithKernel(kernel);
        await RunWithPromptGenerator(kernel);
    }

    // Does not work consistently.
    public static async Task RunWithKernel(IKernel kernel)
    {
        kernel.ImportSemanticSkillFromDirectory(RepoFiles.SkillsPath(), "Prompts");

        var context = kernel.CreateNewContext();
        context.Variables.Set("systemPrompt", prompt);
        var systemPrompt = kernel.Func("Prompts", "SystemMessage");

        // This works occasionally, but fails with invalid JSON most of the time.
        var result = await kernel.RunAsync(context.Variables, systemPrompt);

        // This consistently fails, as it returns invalid JSON:
        // var result = await systemPrompt.InvokeAsync(context);

        Console.WriteLine(result.Result);

        if (result.ErrorOccurred)
        {
            return;
        }

        var systemResponse = JsonSerializer.Deserialize<SystemResponse>(result.Result);

        var systemName = systemResponse?.Name;
        var systemRole = systemResponse?.Role;
        List<string>? systemThoughts = systemResponse?.Thoughts;

        var sb = new StringBuilder();
        sb.AppendLine($"Name: {systemName}");
        sb.AppendLine($"Role: {systemRole}");

        for (var i = 0; i < systemThoughts?.Count; i++)
        {
            sb.AppendLine($"Thought {i + 1}: {systemThoughts[i]}");
        }

        Console.WriteLine(sb.ToString());
    }

    // This works consistently, but I have to use a prompt generator to get the result.
    public static async Task RunWithPromptGenerator(IKernel kernel)
    {
        var promptGenerator = new PromptGenerator(kernel);

        promptGenerator.AddPromptTemplate("SystemMessage", Path.Combine(RepoFiles.SkillsPath(), "Prompts"));

        var context = kernel.CreateNewContext();
        context.Variables.Set("systemPrompt", prompt);

        var systemPrompt = await promptGenerator.GeneratePromptAsync("SystemMessage", context);

        var chatGPT = kernel.GetService<IChatCompletion>();
        var chatHistory = chatGPT.CreateNewChat(systemPrompt);
        var result = await chatGPT.GenerateMessageAsync(chatHistory);

        Console.WriteLine(result);

        var systemResponse = JsonSerializer.Deserialize<SystemResponse>(result);
        var systemName = systemResponse?.Name;
        var systemRole = systemResponse?.Role;
        List<string>? systemThoughts = systemResponse?.Thoughts;

        Console.WriteLine("======== System Response ========");
        var sb = new StringBuilder();
        sb.AppendLine($"Name: {systemName}");
        sb.AppendLine($"Role: {systemRole}");

        for (var i = 0; i < systemThoughts?.Count; i++)
        {
            sb.AppendLine($"Thought {i + 1}: {systemThoughts[i]}");
        }

        Console.WriteLine(sb.ToString());
    }
}

Desktop (please complete the following information):

  • OS: macOS
  • IDE: JetBrains Rider
  • I'm building Semantic Kernel from source

Additional context

PromptGenerator just stores a Dictionary<string, SemanticFunctionConfig> and exposes a single method:

public async Task<string> GeneratePromptAsync(string promptName, SKContext executionContext)
{
    if (!_semanticFunctions.TryGetValue(promptName, out var functionConfig))
    {
        throw new ArgumentException($"Prompt '{promptName}' not found.");
    }

    var template = functionConfig.PromptTemplate;

    return await template.RenderAsync(executionContext).ConfigureAwait(false);
}
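
The rest of `PromptGenerator` is not shown in this report. Here is a minimal sketch of what it presumably looks like; the loading logic in `AddPromptTemplate` is an assumption based on how SK reads skprompt.txt and config.json from a skill directory, so the exact SK calls may differ:

```csharp
using System.Collections.Generic;
using System.IO;

public class PromptGenerator
{
    private readonly IKernel _kernel;
    private readonly Dictionary<string, SemanticFunctionConfig> _semanticFunctions = new();

    public PromptGenerator(IKernel kernel) => _kernel = kernel;

    // Hypothetical loader: reads skprompt.txt (and config.json, if present)
    // from <directory>/<promptName>/ and stores the resulting config.
    public void AddPromptTemplate(string promptName, string directory)
    {
        var promptText = File.ReadAllText(Path.Combine(directory, promptName, "skprompt.txt"));

        var configPath = Path.Combine(directory, promptName, "config.json");
        var config = File.Exists(configPath)
            ? PromptTemplateConfig.FromJson(File.ReadAllText(configPath))
            : new PromptTemplateConfig();

        var template = new PromptTemplate(promptText, config, _kernel.PromptTemplateEngine);
        _semanticFunctions[promptName] = new SemanticFunctionConfig(config, template);
    }
}
```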

Labels

bug (Something isn't working)