Implements

  • LanguageModel

Constructors

  • Parameters

    • id: string

The unique identifier for this language model, used to identify the model in the UI.

    • model: string

The model name as used in the Ollama environment.

    • host: (() => undefined | string)

      A function resolving the Ollama host address; may return undefined.

    • Optional tokenUsageService: TokenUsageService

      An optional service used to track token usage.

    Returns OllamaModel

Properties

DEFAULT_REQUEST_SETTINGS: Partial<Omit<ChatRequest, "stream" | "model">> = ...
host: (() => undefined | string)

Type declaration

    • (): undefined | string
    • Returns undefined | string

id: string

The unique identifier for this language model, used to identify the model in the UI.

model: string

The model name as used in the Ollama environment.

providerId: "ollama" = 'ollama'
tokenUsageService?: TokenUsageService
vendor: string = 'Ollama'

Methods

  • Check whether the Ollama server supports thinking.

    Uses the Ollama 'show' request to retrieve information about the model and checks whether its capabilities include the 'thinking' capability.

    Parameters

    • ollama: Ollama

      The Ollama client instance.

    • model: string

      The name of the Ollama model.

    Returns Promise<boolean>

    A boolean indicating whether the Ollama model supports thinking.
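The capability check described above could be sketched as follows. This is an illustrative sketch, not the actual implementation: the minimal interfaces stand in for the Ollama client types, and it assumes that newer Ollama servers report a `capabilities` array in the response to a 'show' request.

```typescript
// Minimal stand-ins for the Ollama client types (illustrative only).
interface ShowResponse { capabilities?: string[]; }
interface OllamaLike { show(request: { model: string }): Promise<ShowResponse>; }

// Ask the server for model information and look for the 'thinking' capability.
async function supportsThinking(ollama: OllamaLike, model: string): Promise<boolean> {
    try {
        const info = await ollama.show({ model });
        return info.capabilities?.includes('thinking') ?? false;
    } catch {
        // Treat a failed 'show' request as "thinking not supported".
        return false;
    }
}
```

Falling back to `false` on error is a conservative choice: a model that cannot be inspected is simply treated as not supporting thinking.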

  • Parameters

    • ollama: Ollama
    • ollamaRequest: ExtendedChatRequest
    • structured: boolean
    • Optional cancellation: CancellationToken

    Returns Promise<LanguageModelResponse>

  • Retrieves the settings for the chat request, merging the request-specific settings with the default settings.

    Parameters

    • request: LanguageModelRequest

      The language model request containing specific settings.

    Returns Partial<ChatRequest>

    A partial ChatRequest object containing the merged settings.
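The merge described above can be sketched with object spread, where request-specific settings win over the class-level defaults. The default value and the shape of the `settings` field are assumptions for illustration, not the documented `DEFAULT_REQUEST_SETTINGS` value.

```typescript
// Illustrative default; the real DEFAULT_REQUEST_SETTINGS value is not shown in this API.
const DEFAULT_REQUEST_SETTINGS: Record<string, unknown> = { keep_alive: 180 };

// Merge defaults with per-request settings; request-specific keys override defaults.
function getSettings(request: { settings?: Record<string, unknown> }): Record<string, unknown> {
    return { ...DEFAULT_REQUEST_SETTINGS, ...(request.settings ?? {}) };
}
```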

  • Parameters

    • ollama: Ollama
    • chatRequest: ExtendedNonStreamingChatRequest
    • Optional cancellation: CancellationToken

    Returns Promise<LanguageModelResponse>

  • Parameters

    • ollama: Ollama
    • chatRequest: ExtendedChatRequest
    • Optional cancellation: CancellationToken

    Returns Promise<LanguageModelStreamResponse>
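A streaming handler along these lines typically iterates the chunks the Ollama client yields and forwards each message part, stopping early on cancellation. The chunk shape and the cancellation-token interface below are assumptions for illustration.

```typescript
// Assumed chunk shape for a streaming Ollama chat response (illustrative).
interface ChatChunk { message: { content: string }; }
interface CancellationTokenLike { isCancellationRequested: boolean; }

// Convert an Ollama chat stream into an async iterable of text parts,
// stopping early when cancellation is requested.
async function* toTextStream(
    stream: AsyncIterable<ChatChunk>,
    cancellation?: CancellationTokenLike,
): AsyncIterable<string> {
    for await (const chunk of stream) {
        if (cancellation?.isCancellationRequested) { break; }
        yield chunk.message.content;
    }
}
```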

  • Parameters

    • request: LanguageModelRequest
    • Optional cancellationToken: CancellationToken

    Returns Promise<LanguageModelResponse>