
LM Invoker

Defines a base class for language model invokers used in Gen AI applications.

Authors

Henry Wicaksono (henry.wicaksono@gdplabs.id)

References

NONE

BaseLMInvoker(model_id, default_hyperparameters=None, supported_attachments=None, tools=None, response_schema=None, output_analytics=False, retry_config=None, simplify_events=False)

Bases: ABC

A base class for language model invokers used in Gen AI applications.

The BaseLMInvoker class provides a framework for invoking language models. It handles both standard and streaming invocation.

Attributes:

- model_id (str): The model ID of the language model.
- model_provider (str): The provider of the language model.
- model_name (str): The name of the language model.
- default_hyperparameters (dict[str, Any]): Default hyperparameters for invoking the language model.
- tools (list[Tool]): Tools provided to the language model to enable tool calling.
- response_schema (ResponseSchema | None): The schema of the response. If provided, the model outputs a structured response as defined by the schema. Supports both a Pydantic BaseModel and a JSON schema dictionary.
- output_analytics (bool): Whether to output the invocation analytics.
- retry_config (RetryConfig): The retry configuration for the language model.

Initializes a new instance of the BaseLMInvoker class.

Parameters:

- model_id (ModelId): The model ID of the language model. Required.
- default_hyperparameters (dict[str, Any] | None): Default hyperparameters for invoking the language model. Defaults to None, in which case an empty dictionary is used.
- supported_attachments (set[str] | None): A set of supported attachment types. Defaults to None, in which case an empty set is used (indicating that no attachments are supported).
- tools (list[Tool] | None): Tools provided to the model to enable tool calling. Defaults to None, in which case an empty list is used.
- response_schema (ResponseSchema | None): The schema of the response. If provided, the model outputs a structured response as defined by the schema. Supports both a Pydantic BaseModel and a JSON schema dictionary. Defaults to None.
- output_analytics (bool): Whether to output the invocation analytics. Defaults to False.
- retry_config (RetryConfig | None): The retry configuration for the language model. Defaults to None, in which case a default config with no retries and a 30.0-second timeout is used.
- simplify_events (bool): Temporary parameter controlling the streamed events format. When True, the simplified events format is used; when False, the legacy events format is used for backward compatibility. Will be removed in v0.6. Defaults to False.
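Since BaseLMInvoker is abstract (Bases: ABC), it is configured through a concrete subclass. The sketch below shows how the constructor parameters documented above might be passed to such a subclass. The `OpenAILMInvoker` class, both import paths, the model ID string format, and the `RetryConfig` field names are assumptions for illustration, not part of this reference.

```python
from pydantic import BaseModel

# Hypothetical imports -- adjust to the actual package layout of your installation.
from gllm_inference.lm_invoker import OpenAILMInvoker  # assumed concrete subclass
from gllm_inference.schema import RetryConfig  # assumed import path


class Answer(BaseModel):
    """Structured output schema, passed via `response_schema`."""

    summary: str
    confidence: float


lm_invoker = OpenAILMInvoker(
    model_id="openai/gpt-4o-mini",  # exact ModelId format is an assumption
    default_hyperparameters={"temperature": 0.2},  # applied to every invocation
    response_schema=Answer,  # a Pydantic BaseModel or a JSON schema dict
    output_analytics=True,  # include invocation analytics in the output
    retry_config=RetryConfig(max_retries=3, timeout=30.0),  # assumed field names
)
```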

batch: BatchOperations property

The batch operations for the language model.

Returns:

- BatchOperations: The batch operations for the language model.

model_id: str property

The model ID of the language model.

Returns:

- str: The model ID of the language model.

model_name: str property

The name of the language model.

Returns:

- str: The name of the language model.

model_provider: str property

The provider of the language model.

Returns:

- str: The provider of the language model.
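Continuing the hypothetical `lm_invoker` from the constructor sketch, the read-only properties expose the parsed model identity, and batch operations hang off the `batch` property; the example values in the comments are illustrative.

```python
print(lm_invoker.model_id)        # e.g. "openai/gpt-4o-mini"
print(lm_invoker.model_provider)  # e.g. "openai"
print(lm_invoker.model_name)      # e.g. "gpt-4o-mini"

batch_ops = lm_invoker.batch  # BatchOperations for the language model
```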

clear_response_schema()

Clears the response schema for the language model.

This method clears the response schema for the language model by calling the set_response_schema method with None.

clear_tools()

Clears the tools for the language model.

This method clears the tools for the language model by calling the set_tools method with an empty list.
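Both helpers delegate to the corresponding setters, so they are equivalent to the calls shown in the comments below (again using the hypothetical `lm_invoker`):

```python
lm_invoker.clear_response_schema()  # same as lm_invoker.set_response_schema(None)
lm_invoker.clear_tools()            # same as lm_invoker.set_tools([])
```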

invoke(messages, hyperparameters=None, event_emitter=None) async

Invokes the language model.

This method validates the messages and invokes the language model. It handles both standard and streaming invocation. Streaming mode is enabled if an event emitter is provided. The method includes retry logic with exponential backoff for transient failures.

Parameters:

- messages (LMInput): The input messages for the language model. If a list of Message objects is provided, it is used as is. If a list of MessageContent objects or a string is provided, it is converted into a user message.
- hyperparameters (dict[str, Any] | None): A dictionary of hyperparameters for the language model. Defaults to None, in which case the default hyperparameters are used.
- event_emitter (EventEmitter | None): The event emitter for streaming tokens. If provided, streaming invocation is enabled. Defaults to None.

Returns:

- str | LMOutput: The generated response from the language model.

Raises:

- CancelledError: If the invocation is cancelled.
- ModelNotFoundError: If the model is not found.
- ProviderAuthError: If authentication with the model provider fails.
- ProviderInternalError: If an internal provider error occurs.
- ProviderInvalidArgsError: If the model parameters are invalid.
- ProviderOverloadedError: If the model provider is overloaded.
- ProviderRateLimitError: If the model rate limit is exceeded.
- TimeoutError: If the invocation times out.
- ValueError: If the messages are not in the correct format.
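A sketch of standard and streaming invocation, reusing the hypothetical `lm_invoker` from the constructor example. Only the `invoke` signature comes from this reference; the event emitter construction is left as a placeholder because its API is not documented here.

```python
import asyncio


async def main() -> None:
    # Standard invocation: a plain string is converted into a user message.
    try:
        output = await lm_invoker.invoke(
            "Summarize retrieval-augmented generation in one sentence.",
            hyperparameters={"temperature": 0.0},  # overrides the defaults for this call
        )
        print(output)  # str, or LMOutput when analytics/structured output is enabled
    except TimeoutError:
        print("Invocation timed out despite the configured retries.")

    # Streaming invocation is enabled by passing an event emitter:
    # emitter = ...  # an EventEmitter instance; construction is not documented here
    # await lm_invoker.invoke("Stream this answer.", event_emitter=emitter)


asyncio.run(main())
```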

set_response_schema(response_schema)

Sets the response schema for the language model.

This method sets the response schema for the language model. Any existing response schema will be replaced.

Parameters:

- response_schema (ResponseSchema | None): The response schema to be used. Required.
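A sketch of swapping the structured-output schema at runtime. The `Answer` model reuses the hypothetical class from the constructor sketch; the dictionary illustrates the documented JSON schema form.

```python
# Replace the current schema with a Pydantic model...
lm_invoker.set_response_schema(Answer)

# ...or with a JSON schema dictionary.
lm_invoker.set_response_schema(
    {
        "type": "object",
        "properties": {
            "summary": {"type": "string"},
            "confidence": {"type": "number"},
        },
        "required": ["summary", "confidence"],
    }
)

# Passing None removes the schema (equivalent to clear_response_schema()).
lm_invoker.set_response_schema(None)
```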

set_tools(tools)

Sets the tools for the language model.

This method sets the tools for the language model. Any existing tools will be replaced.

Parameters:

- tools (list[Tool]): The list of tools to be used. Required.
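A sketch of replacing the configured tools. How a Tool is constructed is not documented on this page, so the import path and the `Tool(...)` arguments below are assumptions.

```python
from gllm_inference.schema import Tool  # assumed import path


def get_weather(city: str) -> str:
    """Return a short weather summary for the given city."""
    return f"The weather in {city} is sunny."


weather_tool = Tool(name="get_weather", func=get_weather)  # assumed constructor signature

# Replaces any previously configured tools on the invoker.
lm_invoker.set_tools([weather_tool])
```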

InputType

Defines the valid input types in an LM invoker's JSON schema.

Key

Defines the valid keys in an LM invoker's JSON schema.