Add HostedFile/VectorStoreContent, HostedFileSearchTool, and HostedCodeInterpreterTool.Inputs #6620
Conversation
Proposal: CodeExecutionResultContent

Context
The Microsoft.Extensions.AI.Abstractions library needed a unified way to represent the results of server-side code execution across providers.

Decision
Implemented the minimal design described below.

Design Investigation & Rationale

Provider Research Findings
Investigated code execution response formats from four major AI providers:

1. Anthropic Claude
Response Format:
{
"role": "assistant",
"container": {
"id": "container_011CPR5CNjB747bTd36fQLFk",
"expires_at": "2025-05-23T21:13:31.749448Z"
},
"content": [
{
"type": "text",
"text": "I'll calculate the mean and standard deviation for you."
},
{
"type": "server_tool_use",
"id": "srvtoolu_01A2B3C4D5E6F7G8H9I0J1K2",
"name": "code_execution",
"input": {
"code": "import numpy as np\ndata = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]\nmean = np.mean(data)\nstd = np.std(data)\nprint(f\"Mean: {mean}\")\nprint(f\"Standard deviation: {std}\")"
}
},
{
"type": "code_execution_tool_result",
"tool_use_id": "srvtoolu_01A2B3C4D5E6F7G8H9I0J1K2",
"content": {
"type": "code_execution_result",
"stdout": "Mean: 5.5\nStandard deviation: 2.8722813232690143\n",
"stderr": "",
"return_code": 0,
"content": [
{
"file_id": "generated_file_id",
"type": "code_execution_output"
}
]
}
},
{
"type": "text",
"text": "The mean of the dataset is 5.5 and the standard deviation is approximately 2.87."
}
],
"id": "msg_01BqK2v4FnRs4xTjgL8EuZxz",
"model": "claude-opus-4-20250514",
"stop_reason": "end_turn",
"usage": {
"input_tokens": 45,
"output_tokens": 187,
}
}

Key Features:
2. Google Vertex AI / Gemini
Response Format:
{
"executableCode": {
"language": "PYTHON",
"code": "total = 0\nfor i in range(1, 11):\n total += i\nprint(f'{total=}')\n"
},
"codeExecutionResult": {
"outcome": "OUTCOME_OK",
"output": "total=55\n"
}
}

Key Features:
3. OpenAI Code Interpreter (Assistants API)
Response Format:
{
"id": "msg_abc123",
"object": "thread.message",
"created_at": 1698983503,
"thread_id": "thread_abc123",
"role": "assistant",
"content": [
{
"type": "text",
"text": {
"value": "Hi! How can I help you today?",
"annotations": [
{
"type": "file_path",
"end_index": "integer",
"start_index": "integer",
"file_path":
{
"file_id": "string"
}
}
]
}
}
],
"assistant_id": "asst_abc123",
"run_id": "run_abc123",
"attachments":
[
{
"file_id", "string",
"tools": [{ "type": "code_interpreter" }]
}
],
"metadata": {}
}

Streaming Response Format:
{
"id": "step_123",
"object": "thread.run.step.delta",
"delta": {
"step_details": {
"type": "tool_calls",
"tool_calls": [
{
"index": 0,
"id": "call_123",
"type": "code_interpreter",
"code_interpreter":
{
"input": "string",
"outputs":
[
{
"type": "logs",
"logs": "string"
},
{
"type": "image",
"image": { "file_id", "string" }
}
]
}
}
]
}
}
}
3b. OpenAI Responses API (NEW)
Response Format:
{
"type": "code_interpreter_call",
"outputs": [
{
"type": "output_text",
"text": "Calculation result: 42"
},
{
"type": "output_image",
"image": "base64_encoded_image_data"
}
]
}

Streaming Events:

Key Features:
4. Azure AI Foundry
Response Format:
Decision-Making Process
Based on this analysis, we applied the principle of minimal proven abstractions:

Properties INCLUDED (Multi-Provider Support):
Properties EXCLUDED (Not Proven Across Multiple Providers):
Note: While OpenAI Responses API does stream executed code, the
Streaming Investigation

Providers with Streaming Support:
Streaming Patterns Identified:
Final Design Implementation

CodeExecutionResultContent Class

// Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Diagnostics.CodeAnalysis;
using System.Text.Json.Serialization;
namespace Microsoft.Extensions.AI;
/// <summary>
/// Represents the result of code execution in a chat.
/// </summary>
/// <remarks>
/// This content type represents the core output from executing code in various AI provider environments,
/// focusing on the essential properties: stdout, stderr, and generated content.
/// It provides a minimal unified format that accommodates the common execution result patterns
/// from providers such as Anthropic Claude, Azure AI Foundry, OpenAI Code Interpreter, and Google Vertex AI.
/// Generated files are represented using existing AIContent types like DataContent, UriContent, or HostedFileContent.
/// </remarks>
[DebuggerDisplay("{DebuggerDisplay,nq}")]
public sealed class CodeExecutionResultContent : AIContent
{
/// <summary>The standard output from code execution.</summary>
private string? _stdout;
/// <summary>The standard error output from code execution.</summary>
private string? _stderr;
/// <summary>
/// Initializes a new instance of the <see cref="CodeExecutionResultContent"/> class.
/// </summary>
/// <param name="stdout">The standard output from execution.</param>
/// <param name="stderr">The standard error output from execution.</param>
/// <param name="generatedContents">Collection of content generated during execution (images, files, etc.).</param>
[JsonConstructor]
public CodeExecutionResultContent(string? stdout = null, string? stderr = null, IList<AIContent>? generatedContents = null)
{
_stdout = stdout;
_stderr = stderr;
GeneratedContents = generatedContents;
}
/// <summary>
/// Gets or sets the standard output from code execution.
/// </summary>
/// <remarks>
/// This contains the text output that would normally be written to stdout during code execution,
/// such as print statements, successful computation results, etc.
/// </remarks>
[AllowNull]
public string Stdout
{
get => _stdout ?? string.Empty;
set => _stdout = value;
}
/// <summary>
/// Gets or sets the standard error output from code execution.
/// </summary>
/// <remarks>
/// This contains error messages, warnings, and other diagnostic information that would
/// normally be written to stderr during code execution.
/// </remarks>
[AllowNull]
public string Stderr
{
get => _stderr ?? string.Empty;
set => _stderr = value;
}
/// <summary>
/// Gets or sets the collection of content generated during code execution.
/// </summary>
/// <remarks>
/// This includes images, charts, data files, and other artifacts created by the executed code.
/// Generated content is represented using existing AIContent types such as DataContent for inline data,
/// UriContent for downloadable files, or HostedFileContent for provider-hosted files.
/// This property is supported by multiple providers including Google Vertex AI (inline images),
/// Azure AI Foundry (file path annotations), and OpenAI Code Interpreter (file references).
/// </remarks>
public IList<AIContent>? GeneratedContents { get; set; }
/// <summary>
/// Gets the combined text from code execution, including both stdout and stderr.
/// </summary>
/// <remarks>
/// This property combines the standard output and standard error streams into a single string.
/// If both stdout and stderr have content, stderr is appended after stdout.
/// If only one stream has content, only that content is returned.
/// </remarks>
[JsonIgnore]
public string Text
{
get
{
var hasStdout = !string.IsNullOrEmpty(Stdout);
var hasStderr = !string.IsNullOrEmpty(Stderr);
if (hasStdout && hasStderr)
{
return $"{Stdout}\n{Stderr}";
}
if (hasStdout)
{
return Stdout;
}
if (hasStderr)
{
return Stderr;
}
return string.Empty;
}
}
/// <summary>
/// Gets a value indicating whether the code execution was successful.
/// </summary>
/// <remarks>
/// This is determined by checking if there are no errors in stderr.
/// </remarks>
[JsonIgnore]
public bool IsSuccess => string.IsNullOrEmpty(Stderr);
/// <inheritdoc/>
public override string ToString() => Text;
/// <summary>Gets a string representing this instance to display in the debugger.</summary>
[DebuggerBrowsable(DebuggerBrowsableState.Never)]
private string DebuggerDisplay
{
get
{
var status = IsSuccess ? "Success" : "Failed";
var output = !string.IsNullOrEmpty(Text) ? $", Text: \"{Text.Substring(0, Math.Min(50, Text.Length))}\"" : string.Empty;
var generated = GeneratedContents?.Count > 0 ? $", Generated: {GeneratedContents.Count}" : string.Empty;
return $"CodeExecution = {status}{output}{generated}";
}
}
}

AIContent.cs JsonDerivedType Addition

/// <summary>Represents content used by AI services.</summary>
[JsonPolymorphic(TypeDiscriminatorPropertyName = "$type")]
[JsonDerivedType(typeof(CodeExecutionResultContent), typeDiscriminator: "codeExecutionResult")]
[JsonDerivedType(typeof(DataContent), typeDiscriminator: "data")]
[JsonDerivedType(typeof(ErrorContent), typeDiscriminator: "error")]
// ... other existing types
public class AIContent
{
// ... existing implementation
}

Text Property Rationale
The Text property combines stdout and stderr into a single convenience string, so consumers can read the full execution output without inspecting each stream individually.
Consumer Usage Examples

Non-Streaming Scenario

using Microsoft.Extensions.AI;
IChatClient client = // ... initialize your provider client
// Request code execution
var response = await client.GetResponseAsync([
new ChatMessage(ChatRole.User, "Calculate the mean of [1, 2, 3, 4, 5] and create a chart")
]);
// Process the response
foreach (var content in response.Message.Contents)
{
switch (content)
{
case TextContent textContent:
Console.WriteLine($"AI Response: {textContent.Text}");
break;
case CodeExecutionResultContent codeResult:
Console.WriteLine($"Code Execution Output: {codeResult.Text}");
Console.WriteLine($"Success: {codeResult.IsSuccess}");
// Handle generated files
if (codeResult.GeneratedContents != null)
{
foreach (var generated in codeResult.GeneratedContents)
{
switch (generated)
{
case DataContent dataContent:
Console.WriteLine($"Generated inline file: {dataContent.MediaType}");
// Save dataContent.Data to file
break;
case HostedFileContent hostedFile:
Console.WriteLine($"Generated hosted file: {hostedFile.FileId}");
// Download file using provider's file API
break;
case UriContent uriContent:
Console.WriteLine($"Generated file available at: {uriContent.Uri}");
// Download from URI
break;
}
}
}
break;
}
}

Streaming Scenario

using Microsoft.Extensions.AI;
IChatClient client = // ... initialize your provider client
// Request streaming code execution
await foreach (var update in client.GetStreamingResponseAsync([
new ChatMessage(ChatRole.User, "Run a data analysis and generate visualizations")
]))
{
foreach (var content in update.Contents)
{
switch (content)
{
case TextContent textContent:
Console.Write(textContent.Text); // Stream text as it arrives
break;
case CodeExecutionResultContent codeResult:
// Handle incremental execution output
if (!string.IsNullOrEmpty(codeResult.Stdout))
{
Console.Write($"[STDOUT] {codeResult.Stdout}");
}
if (!string.IsNullOrEmpty(codeResult.Stderr))
{
Console.Write($"[STDERR] {codeResult.Stderr}");
}
// Handle generated content as it becomes available
if (codeResult.GeneratedContents?.Count > 0)
{
Console.WriteLine($"\n[FILES] Generated {codeResult.GeneratedContents.Count} files");
foreach (var generated in codeResult.GeneratedContents)
{
Console.WriteLine($" - {generated.GetType().Name}");
}
}
break;
}
}
}

Different Content Combinations

// Example 1: Successful execution with output
var successResult = new CodeExecutionResultContent(
stdout: "Calculation complete!\nMean: 3.0\nStandard deviation: 1.58",
stderr: ""
);
Console.WriteLine(successResult.Text); // Prints the stdout
Console.WriteLine(successResult.IsSuccess); // True
// Example 2: Failed execution with error
var errorResult = new CodeExecutionResultContent(
stdout: "Starting calculation...",
stderr: "NameError: name 'undefined_variable' is not defined"
);
Console.WriteLine(errorResult.Text); // Prints: "Starting calculation...\nNameError: name 'undefined_variable' is not defined"
Console.WriteLine(errorResult.IsSuccess); // False
// Example 3: Execution with generated files
var resultWithFiles = new CodeExecutionResultContent(
stdout: "Chart generated successfully",
generatedContents: [
new DataContent(chartImageBytes, "image/png"),
new DataContent(csvData, "text/csv")
]
);

Streaming Scenarios
Based on this investigation, we identified three streaming patterns that the design accommodates:

Pattern 1: Incremental Output Streaming (Anthropic/Google/OpenAI Responses Style)
Real-time streaming of stdout/stderr as code executes. Supported by: Anthropic Claude, Google Vertex AI, OpenAI Responses API, Azure AI Foundry.

// Provider implementation for incremental streaming
public async IAsyncEnumerable<ChatResponseUpdate> StreamCodeExecution(string code)
{
// Stream 1: Execution start
yield return new ChatResponseUpdate(ChatRole.Assistant, [
new CodeExecutionResultContent(stdout: "Starting execution...\n")
]);
// Stream 2: Incremental output
yield return new ChatResponseUpdate(ChatRole.Assistant, [
new CodeExecutionResultContent(stdout: "Processing data...\n")
]);
// Stream 3: More output
yield return new ChatResponseUpdate(ChatRole.Assistant, [
new CodeExecutionResultContent(stdout: "Generating visualization...\n")
]);
// Stream 4: Final result with generated content
yield return new ChatResponseUpdate(ChatRole.Assistant, [
new CodeExecutionResultContent(
stdout: "Execution complete!\nResults saved.\n",
generatedContents: [
new DataContent(chartImageBytes, "image/png"),
new DataContent(csvData, "text/csv")
]
)
]);
}

Pattern 2: Complete Result Streaming (OpenAI Assistants Style)
A single update with complete execution results. Supported by: OpenAI Assistants API (legacy).

// Provider implementation for complete result streaming
public async IAsyncEnumerable<ChatResponseUpdate> StreamCodeExecution(string code)
{
// Execute code completely, then stream result
var executionResult = await ExecuteCodeAsync(code);
yield return new ChatResponseUpdate(ChatRole.Assistant, [
new CodeExecutionResultContent(
stdout: executionResult.CompleteOutput,
stderr: executionResult.ErrorOutput,
generatedContents: executionResult.GeneratedFiles?.Select(f =>
new HostedFileContent(f.FileId)).ToList()
)
]);
}

Pattern 3: Mixed Content Streaming
Code execution mixed with explanatory text.

// Provider implementation for mixed content streaming
public async IAsyncEnumerable<ChatResponseUpdate> StreamCodeExecution(string code)
{
// Stream 1: AI explanation
yield return new ChatResponseUpdate(ChatRole.Assistant, [
new TextContent("I'll run this code to analyze your data:")
]);
// Stream 2: Code execution result
yield return new ChatResponseUpdate(ChatRole.Assistant, [
new CodeExecutionResultContent(
stdout: "Data analysis complete\nMean: 42.5, Std: 12.3",
generatedContents: [
new DataContent(chartBytes, "image/png")
]
)
]);
// Stream 3: Follow-up explanation
yield return new ChatResponseUpdate(ChatRole.Assistant, [
new TextContent("The results show a normal distribution with...")
]);
}

Consumer Streaming Handling
Consumers can handle all streaming patterns uniformly:

var allExecutionResults = new List<CodeExecutionResultContent>();
var allText = new StringBuilder();
await foreach (var update in client.GetStreamingResponseAsync(messages))
{
foreach (var content in update.Contents)
{
switch (content)
{
case CodeExecutionResultContent codeResult:
allExecutionResults.Add(codeResult);
// Handle incremental output
if (!string.IsNullOrEmpty(codeResult.Text))
{
Console.Write(codeResult.Text);
allText.Append(codeResult.Text);
}
// Handle generated content
if (codeResult.GeneratedContents?.Count > 0)
{
ProcessGeneratedContent(codeResult.GeneratedContents);
}
break;
case TextContent textContent:
Console.Write(textContent.Text);
break;
}
}
}
// Final combined result
var combinedResult = new CodeExecutionResultContent(
stdout: allText.ToString(),
generatedContents: allExecutionResults
.SelectMany(r => r.GeneratedContents ?? [])
.ToList()
);

Benefits of This Design
Alternatives Considered
Future Considerations
I only see error information from Anthropic. How is it represented in the others? I also find it a bit odd calling it stdout/stderr, as that presumes a particular mode of execution. Could error information not be conveyed using ErrorContent in the AIContent collection? And could textual output not just be TextContent in the AIContent collection, along with any other generated output? Then a Text property could just do what ChatResponse/ChatMessage/etc. do, which is have Text just concat any TextContent in their collection. That yields:

public sealed class CodeExecutionResultContent : AIContent
{
    public IList<AIContent>? GeneratedContents { get; set; }

    [JsonIgnore]
    public string Text { get; }
}

Is that sufficient / sufficiently flexible?
Presumably we'd just store the code as DataContent (with an appropriate mime type for the language) in GeneratedContents? How would we handle streaming? I believe the code itself can be streamed as it's generated... would we have distinct DataContent objects for each part, or would we hold onto that content until we can reassemble it, à la function call arguments?
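For reference, a minimal sketch of the simplified shape proposed above, assuming Text simply concatenates any TextContent found in GeneratedContents, the way ChatMessage.Text concatenates its contents. This illustrates the suggestion, not the merged API:

using System.Collections.Generic;
using System.Linq;
using System.Text.Json.Serialization;
using Microsoft.Extensions.AI;

namespace Sketches;

// Sketch only: all output (text, errors, generated files) lives in one AIContent collection.
public sealed class CodeExecutionResultContent : AIContent
{
    /// <summary>Everything the execution produced: TextContent for textual output,
    /// ErrorContent for failures, DataContent/HostedFileContent/UriContent for artifacts.</summary>
    public IList<AIContent>? GeneratedContents { get; set; }

    /// <summary>Concatenates the text of any TextContent items, mirroring ChatMessage.Text.</summary>
    [JsonIgnore]
    public string Text =>
        GeneratedContents is null
            ? string.Empty
            : string.Concat(GeneratedContents.OfType<TextContent>().Select(t => t.Text));
}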
I think this is a good point; stderr can use the ErrorContent type.
IMHO that's too flexible. We definitely could, but my initial thought was that having a dedicated type lets us make a clear distinction between the code execution result and the generated content. This extra-flexible approach (using just GeneratedArtifacts) can be problematic if you want to iterate over the contents and need to tell a code execution block apart from generated file/artifact information, which can happen more often in streaming-handling code. As an alternative we may try
Ideally, as far as I can see in the OpenAI Responses API, we have the tool_id, which I probably missed in the design. We should consider having an identifier for the tool; this can be very useful when streaming multiple code executions.
I was imagining that would be TextContent vs everything else, e.g. there's a distinction between TextContent and DataContent(..., "text/plain"). TextContent would be the tool output, and everything else is a DataContent with an appropriate media type.
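To make that distinction concrete, a small example of how a provider might populate GeneratedContents under that convention; the media types and IDs here are illustrative assumptions:

using System.Collections.Generic;
using System.Text;
using Microsoft.Extensions.AI;

byte[] chartPngBytes = [0x89, 0x50, 0x4E, 0x47]; // placeholder: real image bytes from the execution

List<AIContent> generatedContents =
[
    new TextContent("Mean: 5.5\nStandard deviation: 2.87"),                                // textual tool output
    new DataContent(Encoding.UTF8.GetBytes("import numpy as np\n# ..."), "text/x-python"), // the executed code
    new DataContent(chartPngBytes, "image/png"),                                           // a generated chart
    new HostedFileContent("file_abc123"),                                                  // a provider-hosted artifact
];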
@rogerbarreto, I've started looking at this more, and I'm seeing what look like discrepancies from the AI-generated analysis you shared. OpenAI Assistants appears to provide not only the code that was generated but also any inputs generated to that code. Anthropic does provide the generated code. Can you double-check some of your analysis?
I double-checked; I had missed the details in the doc, as they are quite hidden in the expandables. I've provided updated information.
We can start with this assumption, so any generated content is a DataContent.
One thing to note after having to deal with the hosted content types: this adds a 3-level hierarchy to the repo (FileHostedContent -> HostedContent -> AIContent). Not sure how I feel about it though, or we just use the
Note: VectorSearch actually is a hosted service, not a "Content", but we might consider the reference to it to be the "content". Thoughts?
How would an implementation distinguish whether an incoming ID was for a file or a store?
In what situation would someone want to use the base type directly (if the only benefit of the base type is avoiding duplicating an Id property, it's not worth it, it needs to have some polymorphic use)?
I'm not understanding the distinction being made. VectorSearch as a tool wouldn't be an AIContent, it'd be an AITool. And VectorStore as content is because the content is the ID / reference / whatever data is being passed around.
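To make the trade-off in this exchange concrete, a sketch of the two shapes being discussed; the names and members below are illustrative only, not the library's API:

using Microsoft.Extensions.AI;

namespace OptionA
{
    // A shared base type whose only contribution is an Id. It avoids duplicating the
    // property, but an Id alone cannot tell a caller whether it refers to a file or a
    // store, and there is no obvious polymorphic use for the base type.
    public class HostedContent : AIContent
    {
        public required string Id { get; set; }
    }

    public sealed class HostedFileContent : HostedContent { }
    public sealed class HostedVectorStoreContent : HostedContent { }
}

namespace OptionB
{
    // Independent types: the static type carries the distinction, so each ID property
    // can be named for what it actually identifies.
    public sealed class HostedFileContent : AIContent
    {
        public required string FileId { get; set; }
    }

    public sealed class HostedVectorStoreContent : AIContent
    {
        public required string VectorStoreId { get; set; }
    }
}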
It does, actually. In the response where it provides "input" to the code execution tool, that's the code.
It was not clear to me whether this was the input message (natural language) for generating the code or the code itself to be executed.
If you are the caller, when dealing with the
I misspelled "VectorStore" as "VectorSearch"; agreed, happy to go forward with VectorStore as content.
@rogerbarreto, other than the design of the code interpreter content type, how are you feeling about everything else in this PR? Does the rest of it look good to you design-wise? I'm wondering if we should get everything else merged, and then work on how we want to represent output content from all of these tools.
@stephentoub I'm happy with the current state representing the inputs/definition of the tooling. Happy to progress to the output content in later additions.
I'm thinking about a name for contents that can be used in both input and output scenarios, for example code that I want to be executed server-side by the tool.
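Purely as a naming illustration (this type does not exist in the library), something like a code-carrying content that could flow in either direction:

using Microsoft.Extensions.AI;

// Hypothetical: one content type usable as input (code the caller wants the hosted
// tool to execute) and as output (code the service generated and ran).
public sealed class CodeContent : AIContent
{
    public CodeContent(string code, string? language = null)
    {
        Code = code;
        Language = language;
    }

    /// <summary>The source code.</summary>
    public string Code { get; set; }

    /// <summary>Optional language hint, e.g. "python".</summary>
    public string? Language { get; set; }
}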
I've removed the CodeInterpreterResultContent from this PR, though I left some of the support that will be necessary to add it back (e.g. changes in the coalescing code). Ready for review.
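For context on the coalescing change mentioned above, a rough sketch of what coalescing adjacent text content looks like conceptually; this is an illustration under that assumption, not the ChatResponseExtensions implementation:

using System.Collections.Generic;
using System.Text;
using Microsoft.Extensions.AI;

internal static class CoalescingSketch
{
    /// <summary>Merges runs of adjacent TextContent into a single TextContent,
    /// leaving other content (data, hosted files, etc.) in place.</summary>
    public static List<AIContent> CoalesceText(IEnumerable<AIContent> contents)
    {
        List<AIContent> result = [];
        StringBuilder? pending = null;

        foreach (var content in contents)
        {
            if (content is TextContent text)
            {
                // Accumulate consecutive text pieces.
                (pending ??= new StringBuilder()).Append(text.Text);
            }
            else
            {
                // Flush any accumulated text before the non-text item.
                if (pending is not null)
                {
                    result.Add(new TextContent(pending.ToString()));
                    pending = null;
                }

                result.Add(content);
            }
        }

        if (pending is not null)
        {
            result.Add(new TextContent(pending.ToString()));
        }

        return result;
    }
}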
Pull Request Overview
This PR adds support for hosted file and vector store content types, along with new tools for file search and enhanced code interpreter functionality. It introduces new content types for representing files and vector stores that are hosted by AI services, as well as extending existing tools with input capabilities.
- Adds HostedFileContent and HostedVectorStoreContent classes for representing AI service-hosted resources
- Introduces HostedFileSearchTool for file search operations with configurable inputs and result limits
- Extends HostedCodeInterpreterTool with an Inputs property to support file attachments
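A minimal usage sketch of the additions listed above; the member names used here (the string-ID constructors, Inputs, MaximumResultCount) are assumptions inferred from this summary rather than confirmed API:

using Microsoft.Extensions.AI;

// Reference provider-hosted resources by ID.
var hostedFile = new HostedFileContent("file_abc123");
var vectorStore = new HostedVectorStoreContent("vs_abc123");

// File search over a hosted vector store, capped to a handful of results.
var fileSearch = new HostedFileSearchTool
{
    Inputs = [vectorStore],
    MaximumResultCount = 5,
};

// Code interpreter that can read an attached hosted file.
var codeInterpreter = new HostedCodeInterpreterTool
{
    Inputs = [hostedFile],
};

// Both tools are passed to the service via ChatOptions.Tools.
var options = new ChatOptions { Tools = [fileSearch, codeInterpreter] };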
Reviewed Changes
Copilot reviewed 17 out of 17 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| Contents/HostedFileContent.cs | New content type for AI service-hosted files referenced by ID |
| Contents/HostedVectorStoreContent.cs | New content type for AI service-hosted vector stores referenced by ID |
| HostedFileSearchTool.cs | New tool for file search operations with input files and result count limits |
| HostedCodeInterpreterTool.cs | Added Inputs property to support file attachments for code execution |
| OpenAIAssistantsChatClient.cs | Implementation of new hosted tools for OpenAI Assistants API |
| OpenAIResponsesChatClient.cs | Implementation support for new content types in OpenAI Responses API |
| OpenAIChatClient.cs | Content conversion support for hosted file content |
| ChatResponseExtensions.cs | Enhanced text content coalescing algorithm |
| Test files | Comprehensive unit tests for new classes and integration tests |
eavanvalkenburg left a comment:
small nit
…deInterpreterTool.Inputs (dotnet#6620)
* Add HostedFileContent, HostedVectorStoreContent, HostedFileSearchTool, and HostedCodeInterpreterTool.Inputs
* Update Azure.AI.OpenAI test dependency to 2.3.0-beta.1 (#6698)
* Bring back per library CHANGELOGS for M.E.AI (#6697)
* Revert "Delete M.E.AI changelog files (#6467)". This reverts commit 2ab21ec.
* Bring back per library CHANGELOGS for M.E.AI. By popular demand.
* Fix typos
* Add HostedFile/VectorStoreContent, HostedFileSearchTool, and HostedCodeInterpreterTool.Inputs (#6620)
* Add HostedFileContent, HostedVectorStoreContent, HostedFileSearchTool, and HostedCodeInterpreterTool.Inputs
Closes #6614
cc: @rogerbarreto