the unique id for this language model; it is used to identify the model in the UI
the model id as used by the OpenAI API
whether the streaming API should be used
a function that returns the API key to use for this model, called on each request
a function that returns the OpenAI API version to use for this model, called on each request
the OpenAI-API-compatible endpoint where the model is hosted; if not provided, the default OpenAI endpoint is used
how to handle system messages
the maximum number of retry attempts when a request fails
whether to use the newer OpenAI Response API instead of the Chat Completion API
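The options above can be sketched as a single configuration type. This is a hypothetical reconstruction for illustration only: the field names (`id`, `model`, `enableStreaming`, `apiKey`, `apiVersion`, `url`, `systemMessageHandling`, `maxRetries`, `useResponseApi`) and the `SystemMessageHandling` values are assumptions derived from the descriptions, not the library's actual API.

```typescript
// Hypothetical shape of the model description implied by the fields above.
// All names here are assumptions, not the library's actual identifiers.
type SystemMessageHandling = "keep" | "merge" | "remove";

interface OpenAiModelDescription {
  id: string;                           // unique id, used to identify the model in the UI
  model: string;                        // model id as used by the OpenAI API
  enableStreaming: boolean;             // whether the streaming API should be used
  apiKey: () => string | undefined;     // called on each request
  apiVersion: () => string | undefined; // called on each request
  url?: string;                         // OpenAI-compatible endpoint; default endpoint if omitted
  systemMessageHandling: SystemMessageHandling;
  maxRetries: number;                   // maximum retry attempts when a request fails
  useResponseApi: boolean;              // newer Response API vs. Chat Completion API
}

const example: OpenAiModelDescription = {
  id: "my-gpt-4o",
  model: "gpt-4o",
  enableStreaming: true,
  apiKey: () => "sk-placeholder", // in practice, read from secure storage
  apiVersion: () => undefined,    // only relevant for versioned endpoints
  systemMessageHandling: "keep",
  maxRetries: 3,
  useResponseApi: false,
};
```

Note that `apiKey` and `apiVersion` are functions rather than plain values, so a rotated key or changed version takes effect on the next request without rebuilding the model description.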
Optional proxy: string
Optional reasoningSupport: ReasoningSupport
Readonly id: the unique id for this language model, used to identify the model in the UI
Protected runner: the options for the OpenAI runner
Protected create…, Protected get…: reasoning-level translation lives in openAiReasoningFor
Protected handle… (optional cancellationToken: CancellationToken), Protected initialize…, Protected is…, Protected process… (optional cancellationToken: CancellationToken)
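Several of the protected methods above accept an optional `cancellationToken: CancellationToken`. A common pattern for using such a token with an HTTP client is to bridge it to an `AbortSignal`. The sketch below is an assumption for illustration: the `CancellationToken` shape is modeled on VS Code's API, and `toAbortSignal` is a hypothetical helper, not part of this library.

```typescript
// Hypothetical bridge from a CancellationToken-style parameter (as the
// protected handle/process methods accept) to fetch's AbortSignal.
// The token interface is an assumption modeled on VS Code's CancellationToken.
interface CancellationToken {
  isCancellationRequested: boolean;
  onCancellationRequested(listener: () => void): void;
}

function toAbortSignal(token?: CancellationToken): AbortSignal | undefined {
  if (!token) {
    return undefined; // no token supplied: the request runs to completion
  }
  const controller = new AbortController();
  if (token.isCancellationRequested) {
    controller.abort(); // already cancelled before the request started
  }
  token.onCancellationRequested(() => controller.abort());
  return controller.signal;
}
```

The resulting signal can be passed to `fetch(url, { signal })` so that cancelling the token aborts the in-flight request.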
See also: VS Code's ILanguageModelChatMetadata.