Invoke a tool listed in lm.tools by name with the given input. The input will be validated against the schema declared by the tool.
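As a minimal sketch of a standalone invocation (the tool name 'myExt_readFile' and its input shape are assumed for illustration and are not part of the API):

```typescript
import * as vscode from 'vscode';

// Sketch: invoke a hypothetical tool named 'myExt_readFile' outside of any
// chat flow. The input object must match the inputSchema the tool declared.
async function runToolStandalone(token: vscode.CancellationToken) {
  const tool = vscode.lm.tools.find(t => t.name === 'myExt_readFile');
  if (!tool) {
    return;
  }

  const result = await vscode.lm.invokeTool(
    tool.name,
    {
      input: { path: 'src/extension.ts' }, // validated against tool.inputSchema
      toolInvocationToken: undefined       // no chat request involved here
    },
    token
  );

  // result.content is an array of text and prompt-tsx parts
  for (const part of result.content) {
    if (part instanceof vscode.LanguageModelTextPart) {
      console.log(part.value);
    }
  }
}
```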
A tool can be invoked by a chat participant, in the context of handling a chat request, or globally by any extension in any custom flow.

In the former case, the caller shall pass the toolInvocationToken, which comes from a chat request. This ensures that the chat UI shows the tool invocation for the correct conversation.
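Inside a chat participant's request handler, the same call would forward request.toolInvocationToken. Again, the tool name and input below are assumptions used only for illustration:

```typescript
import * as vscode from 'vscode';

// Sketch: invoking the same hypothetical tool from a chat participant's
// request handler, forwarding the toolInvocationToken so the tool call is
// rendered in the right conversation in the chat UI.
const chatHandler: vscode.ChatRequestHandler = async (request, context, stream, token) => {
  const result = await vscode.lm.invokeTool(
    'myExt_readFile', // assumed tool name
    {
      input: { path: 'src/extension.ts' },
      toolInvocationToken: request.toolInvocationToken
    },
    token
  );

  // ...use result.content to build the next language model prompt...
  return {};
};
```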
A tool result is an array of text and prompt-tsx parts. If the tool caller is using @vscode/prompt-tsx, it can incorporate the response parts into its prompt using a ToolResult. If not, the parts can be passed along to the LanguageModelChat via a user message with a LanguageModelToolResultPart.
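A sketch of the non-prompt-tsx path, where a tool call requested by the model is answered with a user message carrying a LanguageModelToolResultPart; the toolCall, messages, and model values are assumed to come from the caller's own request handling:

```typescript
import * as vscode from 'vscode';

// Sketch: after the model asked for a tool call, invoke the tool and feed its
// result parts back to the model as a user message.
async function answerToolCall(
  model: vscode.LanguageModelChat,
  messages: vscode.LanguageModelChatMessage[],
  toolCall: vscode.LanguageModelToolCallPart,
  chatToken: vscode.ChatParticipantToolToken,
  token: vscode.CancellationToken
) {
  const toolResult = await vscode.lm.invokeTool(
    toolCall.name,
    { input: toolCall.input, toolInvocationToken: chatToken },
    token
  );

  // Echo the assistant's tool call, then answer it with the tool's result parts.
  messages.push(vscode.LanguageModelChatMessage.Assistant([toolCall]));
  messages.push(
    vscode.LanguageModelChatMessage.User([
      new vscode.LanguageModelToolResultPart(toolCall.callId, toolResult.content)
    ])
  );

  return model.sendRequest(messages, {}, token);
}
```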
If a chat participant wants to preserve tool results for requests across multiple turns, it can store tool results in the ChatResult.metadata returned from the handler and retrieve them on the next turn from ChatResponseTurn.result.
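One way this could look, with the metadata key toolResults chosen purely for illustration:

```typescript
import * as vscode from 'vscode';

// Sketch: keep tool results across turns by storing them in ChatResult.metadata
// and reading them back from the chat history on the next turn.
const statefulHandler: vscode.ChatRequestHandler = async (request, context, stream, token) => {
  // Recover tool results stored by a previous turn.
  const previous = context.history
    .filter((turn): turn is vscode.ChatResponseTurn => turn instanceof vscode.ChatResponseTurn)
    .flatMap(turn =>
      (turn.result.metadata?.toolResults as vscode.LanguageModelToolResult[] | undefined) ?? []
    );

  // ...invoke tools for this turn, using `previous` to avoid redundant calls...
  const toolResults: vscode.LanguageModelToolResult[] = [];

  // Stash this turn's results so the next turn can retrieve them.
  return { metadata: { toolResults } };
};
```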