LM Invoker
Modules concerning the language model invokers used in Gen AI applications.
AnthropicLMInvoker(model_name, api_key=None, model_kwargs=None, default_hyperparameters=None, tools=None, response_schema=None, output_analytics=False, retry_config=None, thinking=False, input_transformer=InputTransformerType.IDENTITY, output_transformer=OutputTransformerType.IDENTITY)
Bases: BaseLMInvoker
A language model invoker to interact with Anthropic language models.
Examples:
lm_invoker = AnthropicLMInvoker(model_name="claude-sonnet-4-5")
result = await lm_invoker.invoke("Hi there!")
Supported features
- Basic invocation
- Streaming output
- Message roles
- Multimodal input (text, document, image)
- Structured output
- Tool calling
- Native tools (code interpreter, skill, web search)
- Thinking
- Output analytics
- Retry and timeout
- Extra capabilities (input transformer, output transformer, batch invocation)
For full documentation and examples, please refer to the LM Invoker tutorial: https://gdplabs.gitbook.io/sdk/gen-ai-sdk/tutorials/inference/lm-invoker
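Structured output, for instance, can be enabled by passing a Pydantic model as the response_schema. A minimal sketch (the Recipe schema is hypothetical, and AnthropicLMInvoker is assumed to be imported from this SDK):

```python
from pydantic import BaseModel

# Hypothetical schema for illustration; AnthropicLMInvoker is assumed
# to have been imported from this SDK.
class Recipe(BaseModel):
    name: str
    ingredients: list[str]

lm_invoker = AnthropicLMInvoker(
    model_name="claude-sonnet-4-5",
    response_schema=Recipe,  # a Pydantic BaseModel or a JSON schema dict
)
result = await lm_invoker.invoke("Give me a simple pancake recipe.")
```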
Attributes:

| Name | Type | Description |
|---|---|---|
| model_id | str | The model ID of the language model. |
| model_provider | str | The provider of the language model. |
| model_name | str | The name of the language model. |
| client | AsyncAnthropic | The Anthropic client instance. |
| default_config | dict[str, Any] | Default config for invoking the model. |
| tools | list[Tool] | Tools provided to the model to enable tool calling. |
| response_schema | ResponseSchema \| None | The schema of the response. If provided, the model will output a structured response as defined by the schema. Supports both Pydantic BaseModel and JSON schema dictionary. |
| output_analytics | bool | Whether to output the invocation analytics. |
| retry_config | RetryConfig | The retry configuration for the language model. |
| thinking | ThinkingConfig | The thinking configuration for the language model. |
| input_transformer | InputTransformerType | The type of input transformer to use. |
| output_transformer | OutputTransformerType | The type of output transformer to use. |
Initializes the AnthropicLMInvoker instance.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model_name | str | The name of the Anthropic language model. | required |
| api_key | str \| None | The Anthropic API key. Defaults to None, in which case the ANTHROPIC_API_KEY environment variable will be used. | None |
| model_kwargs | dict[str, Any] \| None | Additional keyword arguments for the Anthropic client. | None |
| default_hyperparameters | dict[str, Any] \| None | Default hyperparameters for invoking the model. Defaults to None. | None |
| tools | list[LMTool] \| None | Tools provided to the model to enable tool calling. Defaults to None, in which case an empty list is used. | None |
| response_schema | ResponseSchema \| None | The schema of the response. If provided, the model will output a structured response as defined by the schema. Supports both Pydantic BaseModel and JSON schema dictionary. Defaults to None. | None |
| output_analytics | bool | Whether to output the invocation analytics. Defaults to False. | False |
| retry_config | RetryConfig \| None | The retry configuration for the language model. Defaults to None, in which case a default config with no retry and 30.0 seconds timeout will be used. | None |
| thinking | bool \| ThinkingConfig | A boolean or ThinkingConfig object to configure thinking. Defaults to False. | False |
| input_transformer | InputTransformerType | The type of input transformer to use. Defaults to InputTransformerType.IDENTITY, which returns the input without transformation. | IDENTITY |
| output_transformer | OutputTransformerType | The type of output transformer to use. Defaults to OutputTransformerType.IDENTITY, which returns the output without transformation. | IDENTITY |
Raises:

| Type | Description |
|---|---|
| ValueError | If the provided arguments are invalid. |
batch (cached property)

The batch operations for the language model.

Returns:

| Name | Type | Description |
|---|---|---|
| AnthropicBatchOperations | AnthropicBatchOperations | The batch operations for the language model. |
skill (cached property)

The skill operations for the language model.

Returns:

| Name | Type | Description |
|---|---|---|
| AnthropicSkillOperations | AnthropicSkillOperations | The skill operations for the language model. |
AzureOpenAILMInvoker(azure_endpoint, azure_deployment, api_key=None, model_kwargs=None, default_hyperparameters=None, tools=None, response_schema=None, output_analytics=False, retry_config=None, thinking=False, input_transformer=InputTransformerType.IDENTITY, output_transformer=OutputTransformerType.IDENTITY)
Bases: OpenAILMInvoker
A language model invoker to interact with Azure OpenAI language models.
Examples:
lm_invoker = AzureOpenAILMInvoker(
azure_endpoint="https://<your-azure-openai-endpoint>.openai.azure.com/openai/v1",
azure_deployment="<your-azure-openai-deployment>",
)
result = await lm_invoker.invoke("Hi there!")
Supported features
- Basic invocation
- Streaming output
- Message roles
- Multimodal input (text, document, image)
- Structured output
- Tool calling
- Thinking
- Output analytics
- Retry and timeout
- Extra capabilities (input transformer, output transformer)
For full documentation and examples, please refer to the LM Invoker tutorial: https://gdplabs.gitbook.io/sdk/gen-ai-sdk/tutorials/inference/lm-invoker
Attributes:

| Name | Type | Description |
|---|---|---|
| model_id | str | The model ID of the language model. |
| model_provider | str | The provider of the language model. |
| model_name | str | The name of the Azure OpenAI language model deployment. |
| client_kwargs | dict[str, Any] | The keyword arguments for the Azure OpenAI client. |
| default_config | dict[str, Any] | Default config for invoking the model. |
| tools | list[Tool] | The list of tools provided to the model to enable tool calling. |
| response_schema | ResponseSchema \| None | The schema of the response. If provided, the model will output a structured response as defined by the schema. Supports both Pydantic BaseModel and JSON schema dictionary. |
| output_analytics | bool | Whether to output the invocation analytics. |
| retry_config | RetryConfig | The retry configuration for the language model. |
| thinking | ThinkingConfig | The thinking configuration for the language model. |
| data_stores | list[AttachmentStore] | The data stores to retrieve internal knowledge from. |
| input_transformer | InputTransformerType | The type of input transformer to use. |
| output_transformer | OutputTransformerType | The type of output transformer to use. |
Initializes a new instance of the AzureOpenAILMInvoker class.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| azure_endpoint | str | The endpoint of the Azure OpenAI service. | required |
| azure_deployment | str | The deployment name of the Azure OpenAI service. | required |
| api_key | str \| None | The API key for authenticating with Azure OpenAI. Defaults to None, in which case the AZURE_OPENAI_API_KEY environment variable will be used. | None |
| model_kwargs | dict[str, Any] \| None | Additional model parameters. Defaults to None. | None |
| default_hyperparameters | dict[str, Any] \| None | Default hyperparameters for invoking the model. Defaults to None, in which case an empty dictionary is used. | None |
| tools | list[LMTool] \| None | Tools provided to the model to enable tool calling. Defaults to None, in which case an empty list is used. | None |
| response_schema | ResponseSchema \| None | The schema of the response. If provided, the model will output a structured response as defined by the schema. Supports both Pydantic BaseModel and JSON schema dictionary. Defaults to None. | None |
| output_analytics | bool | Whether to output the invocation analytics. Defaults to False. | False |
| retry_config | RetryConfig \| None | The retry configuration for the language model. Defaults to None, in which case a default config with no retry and 30.0 seconds timeout will be used. | None |
| thinking | bool \| ThinkingConfig | A boolean or ThinkingConfig object to configure thinking. Defaults to False. | False |
| input_transformer | InputTransformerType | The type of input transformer to use. Defaults to InputTransformerType.IDENTITY, which returns the input without transformation. | IDENTITY |
| output_transformer | OutputTransformerType | The type of output transformer to use. Defaults to OutputTransformerType.IDENTITY, which returns the output without transformation. | IDENTITY |
BedrockLMInvoker(model_name, access_key_id=None, secret_access_key=None, region_name='us-east-1', model_kwargs=None, default_hyperparameters=None, tools=None, response_schema=None, output_analytics=False, retry_config=None, input_transformer=InputTransformerType.IDENTITY, output_transformer=OutputTransformerType.IDENTITY)
Bases: BaseLMInvoker
A language model invoker to interact with AWS Bedrock language models.
Examples:
lm_invoker = BedrockLMInvoker(
model_name="us.anthropic.claude-sonnet-4-20250514-v1:0",
access_key_id="<your-aws-access-key-id>",
secret_access_key="<your-aws-secret-access-key>",
)
result = await lm_invoker.invoke("Hi there!")
Supported features
- Basic invocation
- Streaming output
- Message roles
- Multimodal input (text, document, image, video)
- Structured output
- Tool calling
- Output analytics
- Retry and timeout
- Extra capabilities (input transformer, output transformer)
For full documentation and examples, please refer to the LM Invoker tutorial: https://gdplabs.gitbook.io/sdk/gen-ai-sdk/tutorials/inference/lm-invoker
Attributes:

| Name | Type | Description |
|---|---|---|
| model_id | str | The model ID of the language model. |
| model_provider | str | The provider of the language model. |
| model_name | str | The name of the language model. |
| session | Session | The Bedrock client session. |
| client_kwargs | dict[str, Any] | The Bedrock client kwargs. |
| default_config | dict[str, Any] | Default config for invoking the model. |
| tools | list[Tool] | Tools provided to the model to enable tool calling. |
| response_schema | ResponseSchema \| None | The schema of the response. If provided, the model will output a structured response as defined by the schema. Supports both Pydantic BaseModel and JSON schema dictionary. |
| output_analytics | bool | Whether to output the invocation analytics. |
| retry_config | RetryConfig | The retry configuration for the language model. |
| input_transformer | InputTransformerType | The type of input transformer to use. |
| output_transformer | OutputTransformerType | The type of output transformer to use. |
Initializes the BedrockLMInvoker instance.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model_name | str | The name of the Bedrock language model. | required |
| access_key_id | str \| None | The AWS access key ID. Defaults to None, in which case the AWS_ACCESS_KEY_ID environment variable will be used. | None |
| secret_access_key | str \| None | The AWS secret access key. Defaults to None, in which case the AWS_SECRET_ACCESS_KEY environment variable will be used. | None |
| region_name | str | The AWS region name. Defaults to "us-east-1". | 'us-east-1' |
| model_kwargs | dict[str, Any] \| None | Additional keyword arguments for the Bedrock client. | None |
| default_hyperparameters | dict[str, Any] \| None | Default hyperparameters for invoking the model. Defaults to None. | None |
| tools | list[LMTool] \| None | Tools provided to the model to enable tool calling. Defaults to None, in which case an empty list is used. | None |
| response_schema | ResponseSchema \| None | The schema of the response. If provided, the model will output a structured response as defined by the schema. Supports both Pydantic BaseModel and JSON schema dictionary. Defaults to None. | None |
| output_analytics | bool | Whether to output the invocation analytics. Defaults to False. | False |
| retry_config | RetryConfig \| None | The retry configuration for the language model. Defaults to None, in which case a default config with no retry and 30.0 seconds timeout will be used. | None |
| input_transformer | InputTransformerType | The type of input transformer to use. Defaults to InputTransformerType.IDENTITY, which returns the input without transformation. | IDENTITY |
| output_transformer | OutputTransformerType | The type of output transformer to use. Defaults to OutputTransformerType.IDENTITY, which returns the output without transformation. | IDENTITY |
Raises:

| Type | Description |
|---|---|
| ValueError | If the provided arguments are invalid. |
DatasaurLMInvoker(base_url, api_key=None, model_kwargs=None, default_hyperparameters=None, output_analytics=False, retry_config=None, citations=False, input_transformer=InputTransformerType.IDENTITY, output_transformer=OutputTransformerType.IDENTITY)
Bases: OpenAIChatCompletionsLMInvoker
A language model invoker to interact with Datasaur LLM Projects Deployment API.
Examples:
lm_invoker = DatasaurLMInvoker(base_url="https://deployment.datasaur.ai/api/deployment/teamId/deploymentId/")
result = await lm_invoker.invoke("Hi there!")
Supported features
- Basic invocation
- Streaming output
- Message roles
- Multimodal input (text, audio, document, image)
- Output analytics
- Retry and timeout
- Extra capabilities (input transformer, output transformer)
For full documentation and examples, please refer to the LM Invoker tutorial: https://gdplabs.gitbook.io/sdk/gen-ai-sdk/tutorials/inference/lm-invoker
Citations
The DatasaurLMInvoker can be configured to output the citations used to generate the response.
This feature can be enabled by setting the citations parameter to True.
Citation outputs are stored in the outputs attribute of the LMOutput object and
can be accessed via the citations property.
Examples:
lm_invoker = DatasaurLMInvoker(..., citations=True)
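Given the description above, the citations can then be read from the returned LMOutput object. A sketch (the base URL is a placeholder; DatasaurLMInvoker is assumed to be imported from this SDK):

```python
# Assumes DatasaurLMInvoker has been imported from this SDK.
lm_invoker = DatasaurLMInvoker(
    base_url="https://deployment.datasaur.ai/api/deployment/teamId/deploymentId/",
    citations=True,
)
result = await lm_invoker.invoke("What is the capital of France?")
print(result.citations)  # the citations used to generate the response
```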
Attributes:

| Name | Type | Description |
|---|---|---|
| model_id | str | The model ID of the language model. |
| model_provider | str | The provider of the language model. |
| model_name | str | The name of the language model. |
| client_kwargs | dict[str, Any] | The keyword arguments for the OpenAI client. |
| default_config | dict[str, Any] | Default config for invoking the model. |
| tools | list[Any] | The list of tools provided to the model to enable tool calling. Currently not supported. |
| response_schema | ResponseSchema \| None | The schema of the response. Currently not supported. |
| output_analytics | bool | Whether to output the invocation analytics. |
| retry_config | RetryConfig \| None | The retry configuration for the language model. |
| citations | bool | Whether to output the citations. |
| input_transformer | InputTransformerType | The type of input transformer to use. |
| output_transformer | OutputTransformerType | The type of output transformer to use. |
Initializes a new instance of the DatasaurLMInvoker class.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| base_url | str | The base URL of the Datasaur LLM Projects Deployment API. | required |
| api_key | str \| None | The API key for authenticating with the Datasaur LLM Projects Deployment API. Defaults to None, in which case the API key will be read from an environment variable. | None |
| model_kwargs | dict[str, Any] \| None | Additional model parameters. Defaults to None. | None |
| default_hyperparameters | dict[str, Any] \| None | Default hyperparameters for invoking the model. Defaults to None. | None |
| output_analytics | bool | Whether to output the invocation analytics. Defaults to False. | False |
| retry_config | RetryConfig \| None | The retry configuration for the language model. Defaults to None, in which case a default config with no retry and 30.0 seconds timeout will be used. | None |
| citations | bool | Whether to output the citations. Defaults to False. | False |
| input_transformer | InputTransformerType | The type of input transformer to use. Defaults to InputTransformerType.IDENTITY, which returns the input without transformation. | IDENTITY |
| output_transformer | OutputTransformerType | The type of output transformer to use. Defaults to OutputTransformerType.IDENTITY, which returns the output without transformation. | IDENTITY |
Raises:

| Type | Description |
|---|---|
| ValueError | If the provided arguments are invalid. |
set_response_schema(response_schema)

Sets the response schema for the Datasaur LLM Projects Deployment API.

This method raises a NotImplementedError because the Datasaur LLM Projects Deployment API does not
support response schemas.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| response_schema | ResponseSchema \| None | The response schema to be used. | required |

Raises:

| Type | Description |
|---|---|
| NotImplementedError | This method is not supported for the Datasaur LLM Projects Deployment API. |
set_tools(tools)

Sets the tools for the Datasaur LLM Projects Deployment API.

This method raises a NotImplementedError because the Datasaur LLM Projects Deployment API does not
support tools.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| tools | list[LMTool] | The list of tools to be used. | required |

Raises:

| Type | Description |
|---|---|
| NotImplementedError | This method is not supported for the Datasaur LLM Projects Deployment API. |
GoogleLMInvoker(model_name, api_key=None, credentials_path=None, project_id=None, location='us-central1', model_kwargs=None, default_hyperparameters=None, tools=None, response_schema=None, output_analytics=False, retry_config=None, thinking=None, data_stores=None, auto_upload=True, input_transformer=InputTransformerType.IDENTITY, output_transformer=OutputTransformerType.IDENTITY)
Bases: BaseLMInvoker
A language model invoker to interact with Google language models.
Examples:
lm_invoker = GoogleLMInvoker(model_name="gemini-2.5-flash")
result = await lm_invoker.invoke("Hi there!")
Authentication
The GoogleLMInvoker can use either Google Gen AI or Google Vertex AI.
Google Gen AI is recommended for quick prototyping and development. It requires a Gemini API key for authentication.
Usage example:
lm_invoker = GoogleLMInvoker(
model_name="gemini-2.5-flash",
api_key="your_api_key"
)
Google Vertex AI is recommended for building production-ready applications. It requires a service account JSON file for authentication.
Usage example:
lm_invoker = GoogleLMInvoker(
model_name="gemini-2.5-flash",
credentials_path="path/to/service_account.json"
)
If neither api_key nor credentials_path is provided, Google Gen AI will be used by default.
The GOOGLE_API_KEY environment variable will be used for authentication.
Supported features
- Basic invocation
- Streaming output
- Message roles
- Multimodal input (text, audio, document, image, video)
- Structured output
- Tool calling
- Native tools (code interpreter, data store, image generation, web search)
- Thinking
- Output analytics
- Retry and timeout
- Extra capabilities (input transformer, output transformer, batch invocation, file management, data store management)
For full documentation and examples, please refer to the LM Invoker tutorial: https://gdplabs.gitbook.io/sdk/gen-ai-sdk/tutorials/inference/lm-invoker
Attributes:

| Name | Type | Description |
|---|---|---|
| model_id | str | The model ID of the language model. |
| model_provider | str | The provider of the language model. |
| model_name | str | The name of the language model. |
| client_params | dict[str, Any] | The Google client instance init parameters. |
| default_config | dict[str, Any] | Default config for invoking the model. |
| tools | list[Tool] | The list of tools provided to the model to enable tool calling. |
| response_schema | ResponseSchema \| None | The schema of the response. If provided, the model will output a structured response as defined by the schema. Supports both Pydantic BaseModel and JSON schema dictionary. |
| output_analytics | bool | Whether to output the invocation analytics. |
| retry_config | RetryConfig \| None | The retry configuration for the language model. |
| thinking | ThinkingConfig | The thinking configuration for the language model. |
| image_generation | bool | Whether to generate images. Only allowed for image generation models. |
| data_stores | list[AttachmentStore] | The data stores to retrieve internal knowledge from. |
| auto_upload | bool | Whether to automatically upload attachments to the files API if the inputs' total size exceeds the threshold of 20 MB. |
| input_transformer | InputTransformerType | The type of input transformer to use. |
| output_transformer | OutputTransformerType | The type of output transformer to use. |
Initializes a new instance of the GoogleLMInvoker class.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model_name | str | The name of the model to use. | required |
| api_key | str \| None | Required for Google Gen AI authentication. Cannot be used together with credentials_path. Defaults to None. | None |
| credentials_path | str \| None | Required for Google Vertex AI authentication. Path to the service account credentials JSON file. Cannot be used together with api_key. Defaults to None. | None |
| project_id | str \| None | The Google Cloud project ID for Vertex AI. Only used when authenticating with credentials_path. Defaults to None. | None |
| location | str | The location of the Google Cloud project for Vertex AI. Only used when authenticating with credentials_path. Defaults to "us-central1". | 'us-central1' |
| model_kwargs | dict[str, Any] \| None | Additional keyword arguments for the Google Vertex AI client. | None |
| default_hyperparameters | dict[str, Any] \| None | Default hyperparameters for invoking the model. Defaults to None. | None |
| tools | list[LMTool] \| None | Tools provided to the model to enable tool calling. Defaults to None. | None |
| response_schema | ResponseSchema \| None | The schema of the response. If provided, the model will output a structured response as defined by the schema. Supports both Pydantic BaseModel and JSON schema dictionary. Defaults to None. | None |
| output_analytics | bool | Whether to output the invocation analytics. Defaults to False. | False |
| retry_config | RetryConfig \| None | The retry configuration for the language model. Defaults to None, in which case a default config with no retry and 30.0 seconds timeout will be used. | None |
| thinking | bool \| ThinkingConfig \| None | A boolean or ThinkingConfig object to configure thinking. Defaults to None. | None |
| data_stores | list[AttachmentStore] \| None | The data stores to retrieve internal knowledge from. Defaults to None, in which case no data stores will be used. | None |
| auto_upload | bool | Whether to automatically upload attachments to the files API if the inputs' total size exceeds the threshold of 20 MB. Defaults to True. | True |
| input_transformer | InputTransformerType | The type of input transformer to use. Defaults to InputTransformerType.IDENTITY, which returns the input without transformation. | IDENTITY |
| output_transformer | OutputTransformerType | The type of output transformer to use. Defaults to OutputTransformerType.IDENTITY, which returns the output without transformation. | IDENTITY |
|
Note
If neither api_key nor credentials_path is provided, Google Gen AI will be used by default.
The GOOGLE_API_KEY environment variable will be used for authentication.
batch (cached property)

The batch operations for the language model.

Returns:

| Name | Type | Description |
|---|---|---|
| GoogleBatchOperations | GoogleBatchOperations | The batch operations for the language model. |

data_store (cached property)

The data store operations for the language model.

Returns:

| Name | Type | Description |
|---|---|---|
| GoogleDataStoreOperations | GoogleDataStoreOperations | The data store operations for the language model. |

file (cached property)

The file operations for the language model.

Returns:

| Name | Type | Description |
|---|---|---|
| GoogleFileOperations | GoogleFileOperations | The file operations for the language model. |
set_data_stores(data_stores)

Sets the data stores for the Google language model. Any existing data stores will be replaced.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| data_stores | list[AttachmentStore] | The list of data stores to be used. | required |
LangChainLMInvoker(model=None, model_class_path=None, model_name=None, model_kwargs=None, default_hyperparameters=None, tools=None, response_schema=None, output_analytics=False, retry_config=None, input_transformer=InputTransformerType.IDENTITY, output_transformer=OutputTransformerType.IDENTITY)
Bases: BaseLMInvoker
A language model invoker to interact with LangChain's BaseChatModel.
Examples:
lm_invoker = LangChainLMInvoker(
model_class_path="langchain_openai.ChatOpenAI",
model_name="gpt-5-nano",
)
result = await lm_invoker.invoke("Hi there!")
Initialization
The LangChainLMInvoker can be initialized by either passing:
- A LangChain BaseChatModel instance. Usage example:
from langchain_openai import ChatOpenAI
model = ChatOpenAI(model="gpt-5-nano", api_key="your_api_key")
lm_invoker = LangChainLMInvoker(model=model)
- A model class path in the format of "<module_path>.<class_name>". Usage example:
lm_invoker = LangChainLMInvoker(
model_class_path="langchain_openai.ChatOpenAI",
model_name="gpt-5-nano",
model_kwargs={"api_key": "your_api_key"}
)
For the list of supported providers, please refer to the following table: https://docs.langchain.com/oss/python/integrations/providers/overview#featured-providers
Supported features
- Basic invocation
- Streaming output
- Message roles
- Multimodal input (text, image)
- Structured output
- Tool calling
- Output analytics
- Retry and timeout
- Extra capabilities (input transformer, output transformer)
For full documentation and examples, please refer to the LM Invoker tutorial: https://gdplabs.gitbook.io/sdk/gen-ai-sdk/tutorials/inference/lm-invoker
Attributes:

| Name | Type | Description |
|---|---|---|
| model_id | str | The model ID of the language model. |
| model_provider | str | The provider of the language model. |
| model_name | str | The name of the language model. |
| model | BaseChatModel | The LangChain BaseChatModel instance. |
| default_config | dict[str, Any] | Default config for invoking the model. |
| tools | list[Any] | The list of tools provided to the model to enable tool calling. |
| response_schema | ResponseSchema \| None | The schema of the response. If provided, the model will output a structured response as defined by the schema. Supports both Pydantic BaseModel and JSON schema dictionary. |
| output_analytics | bool | Whether to output the invocation analytics. |
| retry_config | RetryConfig \| None | The retry configuration for the language model. |
| input_transformer | InputTransformerType | The type of input transformer to use. |
| output_transformer | OutputTransformerType | The type of output transformer to use. |
Initializes a new instance of the LangChainLMInvoker class.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model | BaseChatModel \| None | The LangChain BaseChatModel instance. If provided, takes precedence over the model_class_path parameter. Defaults to None. | None |
| model_class_path | str \| None | The LangChain BaseChatModel class path. Must be formatted as "<module_path>.<class_name>", e.g. "langchain_openai.ChatOpenAI". Defaults to None. | None |
| model_name | str \| None | The model name. Only used if model_class_path is provided. Defaults to None. | None |
| model_kwargs | dict[str, Any] \| None | The additional keyword arguments. Only used if model_class_path is provided. Defaults to None. | None |
| default_hyperparameters | dict[str, Any] \| None | Default hyperparameters for invoking the model. Defaults to None. | None |
| tools | list[LMTool] \| None | Tools provided to the model to enable tool calling. Defaults to None. | None |
| response_schema | ResponseSchema \| None | The schema of the response. If provided, the model will output a structured response as defined by the schema. Supports both Pydantic BaseModel and JSON schema dictionary. Defaults to None. | None |
| output_analytics | bool | Whether to output the invocation analytics. Defaults to False. | False |
| retry_config | RetryConfig \| None | The retry configuration for the language model. Defaults to None, in which case a default config with no retry and 30.0 seconds timeout will be used. | None |
| input_transformer | InputTransformerType | The type of input transformer to use. Defaults to InputTransformerType.IDENTITY, which returns the input without transformation. | IDENTITY |
| output_transformer | OutputTransformerType | The type of output transformer to use. Defaults to OutputTransformerType.IDENTITY, which returns the output without transformation. | IDENTITY |
Raises:

| Type | Description |
|---|---|
| ValueError | If neither model nor model_class_path is provided. |
LiteLLMLMInvoker(model_id, default_hyperparameters=None, tools=None, response_schema=None, output_analytics=False, retry_config=None, thinking=False, input_transformer=InputTransformerType.IDENTITY, output_transformer=OutputTransformerType.IDENTITY)
Bases: OpenAIChatCompletionsLMInvoker
A language model invoker to interact with language models using LiteLLM.
Examples:
lm_invoker = LiteLLMLMInvoker(model_id="openai/gpt-5-nano")
result = await lm_invoker.invoke("Hi there!")
Initialization
The LiteLLMLMInvoker provides an interface to interact with multiple language model providers.
To use this class:
1. The model_id parameter must be in the format of provider/model_name, e.g. openai/gpt-4o-mini.
2. The required credentials must be provided via environment variables.
Usage example:
os.environ["OPENAI_API_KEY"] = "your_openai_api_key"
lm_invoker = LiteLLMLMInvoker(model_id="openai/gpt-4o-mini")
For the complete list of supported providers and their required credentials, please refer to the LiteLLM documentation: https://docs.litellm.ai/docs/providers/
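Switching providers only changes the model_id prefix and the credential environment variable. A sketch for Anthropic via LiteLLM (the model name is illustrative; the prefix and variable name follow LiteLLM's provider conventions, and LiteLLMLMInvoker is assumed to be imported from this SDK):

```python
import os

# Assumes LiteLLMLMInvoker has been imported from this SDK.
os.environ["ANTHROPIC_API_KEY"] = "your_anthropic_api_key"
lm_invoker = LiteLLMLMInvoker(model_id="anthropic/claude-sonnet-4-5")
result = await lm_invoker.invoke("Hi there!")
```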
Supported features
- Basic invocation
- Streaming output
- Message roles
- Multimodal input (text, audio, image)
- Structured output
- Tool calling
- Thinking
- Output analytics
- Retry and timeout
- Extra capabilities (input transformer, output transformer)
For full documentation and examples, please refer to the LM Invoker tutorial: https://gdplabs.gitbook.io/sdk/gen-ai-sdk/tutorials/inference/lm-invoker
Attributes:

| Name | Type | Description |
|---|---|---|
| model_id | str | The model ID of the language model. |
| model_provider | str | The provider of the language model. |
| model_name | str | The name of the language model. |
| default_config | dict[str, Any] | Default config for invoking the model. |
| tools | list[Tool] | The list of tools provided to the model to enable tool calling. |
| response_schema | ResponseSchema \| None | The schema of the response. If provided, the model will output a structured response as defined by the schema. Supports both Pydantic BaseModel and JSON schema dictionary. |
| output_analytics | bool | Whether to output the invocation analytics. |
| retry_config | RetryConfig \| None | The retry configuration for the language model. |
| thinking | ThinkingConfig | The thinking configuration for the language model. |
| input_transformer | InputTransformerType | The type of input transformer to use. |
| output_transformer | OutputTransformerType | The type of output transformer to use. |
Initializes a new instance of the LiteLLMLMInvoker class.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model_id | str | The ID of the model to use. Must be in the format of provider/model_name, e.g. openai/gpt-4o-mini. | required |
| default_hyperparameters | dict[str, Any] \| None | Default hyperparameters for invoking the model. Defaults to None. | None |
| tools | list[LMTool] \| None | Tools provided to the model to enable tool calling. Defaults to None. | None |
| response_schema | ResponseSchema \| None | The schema of the response. If provided, the model will output a structured response as defined by the schema. Supports both Pydantic BaseModel and JSON schema dictionary. Defaults to None. | None |
| output_analytics | bool | Whether to output the invocation analytics. Defaults to False. | False |
| retry_config | RetryConfig \| None | The retry configuration for the language model. Defaults to None, in which case a default config with no retry and 30.0 seconds timeout will be used. | None |
| thinking | bool \| ThinkingConfig | A boolean or ThinkingConfig object to configure thinking. Defaults to False. | False |
| input_transformer | InputTransformerType | The type of input transformer to use. Defaults to InputTransformerType.IDENTITY, which returns the input without transformation. | IDENTITY |
| output_transformer | OutputTransformerType | The type of output transformer to use. Defaults to OutputTransformerType.IDENTITY, which returns the output without transformation. | IDENTITY |
|
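The default retry behavior described above (no retries, 30.0-second timeout) can be sketched with stdlib asyncio. This is an illustrative stand-in, not the SDK's actual RetryConfig implementation; the function and parameter names are assumptions.

```python
# Hypothetical sketch of the documented retry defaults: no retries,
# with a 30.0-second timeout per attempt. Names are illustrative only.
import asyncio


async def invoke_with_retry(fn, max_retries: int = 0, timeout: float = 30.0):
    """Call an async `fn`, retrying up to `max_retries` times on failure."""
    last_error = None
    for _ in range(max_retries + 1):
        try:
            return await asyncio.wait_for(fn(), timeout=timeout)
        except Exception as error:  # includes timeouts
            last_error = error
    raise last_error


async def main():
    attempts = {"n": 0}

    async def flaky():
        attempts["n"] += 1
        if attempts["n"] < 2:
            raise RuntimeError("transient failure")
        return "ok"

    # One retry is enough to recover from a single transient failure.
    print(await invoke_with_retry(flaky, max_retries=1))


asyncio.run(main())
```

With `max_retries=0` (the documented default), a single failure propagates immediately.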
OpenAIChatCompletionsLMInvoker(model_name, api_key=None, base_url=OPENAI_DEFAULT_URL, model_kwargs=None, default_hyperparameters=None, tools=None, response_schema=None, output_analytics=False, retry_config=None, thinking=False, input_transformer=InputTransformerType.IDENTITY, output_transformer=OutputTransformerType.IDENTITY)
Bases: StreamingBufferMixin, BaseLMInvoker
A language model invoker to interact with OpenAI language models using the Chat Completions API.
This class provides support for OpenAI's Chat Completions API schema. Use this class only when you have
a specific reason to use the Chat Completions API over the Responses API, as OpenAI recommends using
the Responses API whenever possible. The Responses API schema is supported through the OpenAILMInvoker class.
Examples:
lm_invoker = OpenAIChatCompletionsLMInvoker(model_name="gpt-5-nano")
result = await lm_invoker.invoke("Hi there!")
Supported features
- Basic invocation
- Streaming output
- Message roles
- Multimodal input a. Text b. Audio c. Document d. Image
- Structured output
- Tool calling
- Thinking
- Output analytics
- Retry and timeout
- Extra capabilities a. Input transformer b. Output transformer
For full documentation and examples, please refer to the LM Invoker tutorial: https://gdplabs.gitbook.io/sdk/gen-ai-sdk/tutorials/inference/lm-invoker
OpenAI compatible endpoints
This class can interact with endpoints compatible with OpenAI's Chat Completions API schema.
These include, but are not limited to:
1. DeepInfra (https://deepinfra.com/)
2. DeepSeek (https://deepseek.com/)
3. Groq (https://groq.com/)
4. OpenRouter (https://openrouter.ai/)
5. Text Generation Inference (https://github.com/huggingface/text-generation-inference)
6. Together.ai (https://together.ai/)
7. vLLM (https://vllm.ai/)
To do this, simply set base_url to the endpoint URL.
Supported features may vary between endpoints.
Unsupported features will result in an error.
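The endpoint targeting described above boils down to swapping the base URL while keeping the rest of the client configuration unchanged. A minimal sketch (not the SDK's actual internals; the default URL value and environment variable name are assumptions):

```python
# Illustrative sketch of targeting an OpenAI-compatible endpoint:
# only base_url changes; the api_key resolution stays the same.
import os

OPENAI_DEFAULT_URL = "https://api.openai.com/v1"  # assumed default


def build_client_kwargs(api_key=None, base_url=OPENAI_DEFAULT_URL):
    """Resolve the API key (falling back to an env var, name assumed)
    and the base URL into client keyword arguments."""
    return {
        "api_key": api_key or os.getenv("OPENAI_API_KEY"),
        "base_url": base_url,
    }


# Point the same configuration at a compatible endpoint, e.g. Groq:
kwargs = build_client_kwargs(
    api_key="gsk-example",
    base_url="https://api.groq.com/openai/v1",
)
print(kwargs["base_url"])
```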
Attributes:
| Name | Type | Description |
|---|---|---|
model_id |
str
|
The model ID of the language model. |
model_provider |
str
|
The provider of the language model. |
model_name |
str
|
The name of the language model. |
client_kwargs |
dict[str, Any]
|
The keyword arguments for the OpenAI client. |
default_config |
dict[str, Any]
|
Default config for invoking the model. |
tools |
list[Tool]
|
The list of tools provided to the model to enable tool calling. |
response_schema |
ResponseSchema | None
|
The schema of the response. If provided, the model will output a structured response as defined by the schema. Supports both Pydantic BaseModel and JSON schema dictionary. |
thinking |
ThinkingConfig
|
The thinking configuration for the language model. |
output_analytics |
bool
|
Whether to output the invocation analytics. |
retry_config |
RetryConfig | None
|
The retry configuration for the language model. |
input_transformer |
InputTransformerType
|
The type of input transformer to use. |
output_transformer |
OutputTransformerType
|
The type of output transformer to use. |
Initializes a new instance of the OpenAIChatCompletionsLMInvoker class.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
model_name
|
str
|
The name of the OpenAI model. |
required |
api_key
|
str | None
|
The API key for authenticating with OpenAI. Defaults to None, in which
case the |
None
|
base_url
|
str
|
The base URL of a custom endpoint that is compatible with OpenAI's Chat Completions API schema. Defaults to OpenAI's default URL. |
OPENAI_DEFAULT_URL
|
model_kwargs
|
dict[str, Any] | None
|
Additional model parameters. Defaults to None. |
None
|
default_hyperparameters
|
dict[str, Any] | None
|
Default hyperparameters for invoking the model. Defaults to None. |
None
|
tools
|
list[LMTool] | None
|
Tools provided to the model to enable tool calling. Defaults to None. |
None
|
response_schema
|
ResponseSchema | None
|
The schema of the response. If provided, the model will output a structured response as defined by the schema. Supports both Pydantic BaseModel and JSON schema dictionary. Defaults to None. |
None
|
output_analytics
|
bool
|
Whether to output the invocation analytics. Defaults to False. |
False
|
retry_config
|
RetryConfig | None
|
The retry configuration for the language model. Defaults to None, in which case a default config with no retry and 30.0 seconds timeout will be used. |
None
|
thinking
|
bool | ThinkingConfig
|
A boolean or ThinkingConfig object to configure thinking. Defaults to False. |
False
|
input_transformer
|
InputTransformerType
|
The type of input transformer to use. Defaults to InputTransformerType.IDENTITY, which returns the input without transformation. |
IDENTITY
|
output_transformer
|
OutputTransformerType
|
The type of output transformer to use. Defaults to OutputTransformerType.IDENTITY, which returns the output without transformation. |
IDENTITY
|
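The `thinking` parameter above accepts either a boolean or a ThinkingConfig object. A hedged sketch of how such a union might be normalized into a single config; the real ThinkingConfig fields are not shown in this reference, so the field below is an assumption:

```python
# Sketch of normalizing `bool | ThinkingConfig` into one config object.
# The `enabled` field is illustrative; the SDK's class may differ.
from dataclasses import dataclass


@dataclass
class ThinkingConfig:
    enabled: bool = False


def normalize_thinking(thinking) -> ThinkingConfig:
    """Accept either a bool or a ThinkingConfig; return a ThinkingConfig."""
    if isinstance(thinking, ThinkingConfig):
        return thinking
    return ThinkingConfig(enabled=bool(thinking))


print(normalize_thinking(True))  # ThinkingConfig(enabled=True)
```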
OpenAILMInvoker(model_name, api_key=None, base_url=OPENAI_DEFAULT_URL, model_kwargs=None, default_hyperparameters=None, tools=None, response_schema=None, output_analytics=False, retry_config=None, thinking=False, image_generation=False, data_stores=None, input_transformer=InputTransformerType.IDENTITY, output_transformer=OutputTransformerType.IDENTITY)
Bases: BaseLMInvoker
A language model invoker to interact with OpenAI language models.
This class provides support for OpenAI's Responses API schema, which is recommended by OpenAI as the preferred API
to use whenever possible. Use this class unless you have a specific reason to use the Chat Completions API instead.
The Chat Completions API schema is supported through the OpenAIChatCompletionsLMInvoker class.
Examples:
lm_invoker = OpenAILMInvoker(model_name="gpt-5-nano")
result = await lm_invoker.invoke("Hi there!")
Supported features
- Basic invocation
- Streaming output
- Message roles
- Multimodal input a. Text b. Document c. Image
- Structured output
- Tool calling
- Native tools a. Code interpreter b. Data store c. Image generation d. MCP connector e. MCP server f. Web search
- Thinking
- Output analytics
- Retry and timeout
- Extra capabilities a. Input transformer b. Output transformer c. Batch invocation d. File management e. Data store management
For full documentation and examples, please refer to the LM Invoker tutorial: https://gdplabs.gitbook.io/sdk/gen-ai-sdk/tutorials/inference/lm-invoker
OpenAI compatible endpoints
This class can interact with endpoints compatible with OpenAI's Responses API schema
(e.g. SGLang: https://github.com/sgl-project/sglang). To do this, simply set base_url to the endpoint URL.
Supported features may vary between endpoints. Unsupported features will result in an error.
Attributes:
| Name | Type | Description |
|---|---|---|
model_id |
str
|
The model ID of the language model. |
model_provider |
str
|
The provider of the language model. |
model_name |
str
|
The name of the language model. |
client_kwargs |
dict[str, Any]
|
The keyword arguments for the OpenAI client. |
default_config |
dict[str, Any]
|
Default config for invoking the model. |
tools |
list[Tool]
|
The list of tools provided to the model to enable tool calling. |
response_schema |
ResponseSchema | None
|
The schema of the response. If provided, the model will output a structured response as defined by the schema. Supports both Pydantic BaseModel and JSON schema dictionary. |
output_analytics |
bool
|
Whether to output the invocation analytics. |
retry_config |
RetryConfig
|
The retry configuration for the language model. |
thinking |
ThinkingConfig
|
The thinking configuration for the language model. |
image_generation |
bool
|
Whether to enable image generation. |
data_stores |
list[AttachmentStore]
|
The data stores to retrieve internal knowledge from. |
input_transformer |
InputTransformerType
|
The type of input transformer to use. |
output_transformer |
OutputTransformerType
|
The type of output transformer to use. |
Initializes a new instance of the OpenAILMInvoker class.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
model_name
|
str
|
The name of the OpenAI model. |
required |
api_key
|
str | None
|
The API key for authenticating with OpenAI. Defaults to None, in which
case the |
None
|
base_url
|
str
|
The base URL of a custom endpoint that is compatible with OpenAI's Responses API schema. Defaults to OpenAI's default URL. |
OPENAI_DEFAULT_URL
|
model_kwargs
|
dict[str, Any] | None
|
Additional model parameters. Defaults to None. |
None
|
default_hyperparameters
|
dict[str, Any] | None
|
Default hyperparameters for invoking the model. Defaults to None. |
None
|
tools
|
list[LMTool] | None
|
Tools provided to the model to enable tool calling. Defaults to None, in which case an empty list is used. |
None
|
response_schema
|
ResponseSchema | None
|
The schema of the response. If provided, the model will output a structured response as defined by the schema. Supports both Pydantic BaseModel and JSON schema dictionary. Defaults to None. |
None
|
output_analytics
|
bool
|
Whether to output the invocation analytics. Defaults to False. |
False
|
retry_config
|
RetryConfig | None
|
The retry configuration for the language model. Defaults to None, in which case a default config with no retry and 30.0 seconds timeout will be used. |
None
|
thinking
|
bool | ThinkingConfig
|
A boolean or ThinkingConfig object to configure thinking. Defaults to False. |
False
|
image_generation
|
bool
|
Whether to enable image generation. Defaults to False. |
False
|
data_stores
|
list[AttachmentStore] | None
|
The data stores to retrieve internal knowledge from. Defaults to None, in which case no data stores will be used. |
None
|
input_transformer
|
InputTransformerType
|
The type of input transformer to use. Defaults to InputTransformerType.IDENTITY, which returns the input without transformation. |
IDENTITY
|
output_transformer
|
OutputTransformerType
|
The type of output transformer to use. Defaults to OutputTransformerType.IDENTITY, which returns the output without transformation. |
IDENTITY
|
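The `response_schema` parameter accepts a Pydantic BaseModel or a JSON schema dictionary. A minimal JSON-schema sketch (the field names are illustrative, and the commented-out invocation assumes valid credentials):

```python
# A JSON schema dictionary suitable for structured output.
# Field names are illustrative only.
movie_schema = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "year": {"type": "integer"},
    },
    "required": ["title", "year"],
}

# Passed at construction time (sketch; requires an API key to actually run):
# lm_invoker = OpenAILMInvoker(model_name="gpt-5-nano",
#                              response_schema=movie_schema)

print(sorted(movie_schema["required"]))
```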
batch
cached
property
The batch operations for the language model.
Returns:
| Name | Type | Description |
|---|---|---|
OpenAIBatchOperations |
OpenAIBatchOperations
|
The batch operations for the language model. |
data_store
cached
property
The data store operations for the language model.
Returns:
| Name | Type | Description |
|---|---|---|
OpenAIDataStoreOperations |
OpenAIDataStoreOperations
|
The data store operations for the language model. |
file
cached
property
The file operations for the language model.
Returns:
| Name | Type | Description |
|---|---|---|
OpenAIFileOperations |
OpenAIFileOperations
|
The file operations for the language model. |
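The `batch`, `data_store`, and `file` operations are exposed as cached properties: the operations object is built on first access and reused afterwards. A stdlib sketch of that pattern (class names are illustrative, not the SDK's):

```python
# Sketch of the cached-property pattern used for operation namespaces:
# built lazily on first access, then reused.
from functools import cached_property


class FileOperations:
    """Illustrative stand-in for an operations namespace."""

    def __init__(self, client):
        self.client = client


class InvokerSketch:
    def __init__(self):
        self.client = object()

    @cached_property
    def file(self) -> FileOperations:
        # Constructed once; subsequent accesses return the same object.
        return FileOperations(self.client)


invoker = InvokerSketch()
assert invoker.file is invoker.file  # same object on repeated access
```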
set_data_stores(data_stores)
Sets the data stores for the OpenAI language model.
This method sets the data stores for the OpenAI language model. Any existing data stores will be replaced.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
data_stores
|
list[AttachmentStore]
|
The list of data stores to be used. |
required |
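"Any existing data stores will be replaced" means the setter assigns a new list rather than extending the old one. A minimal sketch of that semantics (class name illustrative):

```python
# Sketch of replace-not-append setter semantics for data stores.
class InvokerSketch:
    def __init__(self):
        self.data_stores = []

    def set_data_stores(self, data_stores: list) -> None:
        self.data_stores = list(data_stores)  # replace, don't append


inv = InvokerSketch()
inv.set_data_stores(["store_a"])
inv.set_data_stores(["store_b"])
print(inv.data_stores)  # ['store_b']
```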
PortkeyLMInvoker(model_name=None, portkey_api_key=None, provider=None, api_key=None, config=None, custom_host=None, model_kwargs=None, default_hyperparameters=None, tools=None, response_schema=None, output_analytics=False, retry_config=None, thinking=False, input_transformer=InputTransformerType.IDENTITY, output_transformer=OutputTransformerType.IDENTITY)
Bases: OpenAIChatCompletionsLMInvoker
A language model invoker to interact with Portkey's Universal API.
This class provides support for Portkey's Universal AI Gateway, which enables unified access to
multiple providers (e.g., OpenAI, Anthropic, Google, Cohere, Bedrock) via a single API key.
The PortkeyLMInvoker is compatible with all Portkey model routing configurations, including
model catalog entries, direct providers, and pre-defined configs.
Examples:
The PortkeyLMInvoker supports multiple authentication methods with strict precedence order.
Authentication methods are mutually exclusive and cannot be combined.
Authentication Precedence (Highest to Lowest):
1. Config ID Authentication (Highest precedence)
Use a pre-configured routing setup from Portkey's dashboard.
python
lm_invoker = PortkeyLMInvoker(
portkey_api_key="<your-portkey-api-key>",
config="pc-openai-4f6905",
)
2. Model Catalog Authentication
The provider name must match the provider name set in the model catalog. Details on setting up the model catalog can be found at https://portkey.ai/docs/product/model-catalog#model-catalog. There are two ways to specify the model name:
2.1. Using Combined Model Name Format
Specify the model_name in '@provider-name/model-name' format.
python
lm_invoker = PortkeyLMInvoker(
portkey_api_key="<your-portkey-api-key>",
model_name="@openai-custom/gpt-4o"
)
2.2. Using Separate Provider and Model Name Parameters
Specify the provider in '@provider-name' format and model_name separately.
python
lm_invoker = PortkeyLMInvoker(
portkey_api_key="<your-portkey-api-key>",
provider="@openai-custom",
model_name="gpt-4o",
)
3. Direct Provider Authentication (Lowest precedence)
Use the provider in 'provider-name' format together with the model_name and api_key parameters.
python
lm_invoker = PortkeyLMInvoker(
portkey_api_key="<your-portkey-api-key>",
provider="openai",
model_name="gpt-4o",
api_key="sk-...",
)
Custom Host
You can also use the custom_host parameter to override the default host. This is available
for all authentication methods except for Config ID authentication.
lm_invoker = PortkeyLMInvoker(..., custom_host="https://your-custom-endpoint.com")
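The precedence rules above can be sketched as a small resolver: config ID first, then model catalog (an '@' prefix on the provider slug or model name), then direct provider with an API key. This mirrors the documented rules; it is not the SDK's actual resolution code.

```python
# Sketch of the documented authentication precedence (highest to lowest):
# 1. config ID, 2. model catalog ('@' prefix), 3. direct provider + api_key.
def resolve_auth(model_name=None, provider=None, api_key=None, config=None):
    if config is not None:
        return "config"
    if (model_name or "").startswith("@") or (provider or "").startswith("@"):
        return "catalog"
    if provider is not None and api_key is not None:
        return "direct"
    raise ValueError("no valid authentication method provided")


print(resolve_auth(config="pc-openai-4f6905"))           # config
print(resolve_auth(model_name="@openai-custom/gpt-4o"))  # catalog
print(resolve_auth(provider="openai", model_name="gpt-4o",
                   api_key="sk-example"))                # direct
```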
Supported features
- Basic invocation
- Streaming output
- Message roles
- Multimodal input a. Text b. Audio c. Document d. Image
- Structured output
- Tool calling
- Thinking
- Output analytics
- Retry and timeout
- Extra capabilities a. Input transformer b. Output transformer
For full documentation and examples, please refer to the LM Invoker tutorial: https://gdplabs.gitbook.io/sdk/gen-ai-sdk/tutorials/inference/lm-invoker
Attributes:
| Name | Type | Description |
|---|---|---|
model_id |
str
|
The model ID of the language model. |
model_provider |
str
|
The provider of the language model. |
model_name |
str
|
The catalog name of the language model. |
client_kwargs |
dict[str, Any]
|
The keyword arguments for the Portkey client. |
default_config |
dict[str, Any]
|
Default config for invoking the model. |
tools |
list[Tool]
|
The list of tools provided to the model to enable tool calling. |
response_schema |
ResponseSchema | None
|
The schema of the response. If provided, the model will output a structured response as defined by the schema. Supports both Pydantic BaseModel and JSON schema dictionary. |
output_analytics |
bool
|
Whether to output the invocation analytics. |
retry_config |
RetryConfig
|
The retry configuration for the language model. |
thinking |
ThinkingConfig
|
The thinking configuration for the language model. |
input_transformer |
InputTransformerType
|
The type of input transformer to use. |
output_transformer |
OutputTransformerType
|
The type of output transformer to use. |
Initializes a new instance of the PortkeyLMInvoker class.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
model_name
|
str | None
|
The name of the model to use. Acceptable formats: 1. 'model' for direct authentication, 2. '@provider-slug/model' for model catalog authentication. Defaults to None. |
None
|
portkey_api_key
|
str | None
|
The Portkey API key. Defaults to None, in which
case the |
None
|
provider
|
str | None
|
Provider name or catalog slug. Acceptable formats: 1. '@provider-slug' for model catalog authentication (no api_key needed), 2. 'provider' for direct authentication (requires api_key). Will be combined with model_name if model name is not in the format '@provider-slug/model'. Defaults to None. |
None
|
api_key
|
str | None
|
Provider's API key for direct authentication. Must be used with 'provider' parameter (without '@' prefix). Not needed for catalog providers. Defaults to None. |
None
|
config
|
str | None
|
Portkey config ID for complex routing configurations, load balancing, or fallback scenarios. Defaults to None. |
None
|
custom_host
|
str | None
|
Custom host URL for self-hosted or custom endpoints. Can be combined with catalog providers. Defaults to None. |
None
|
model_kwargs
|
dict[str, Any] | None
|
Additional model parameters and authentication. Defaults to None. |
None
|
default_hyperparameters
|
dict[str, Any] | None
|
Default hyperparameters for model invocation (temperature, max_tokens, etc.). Defaults to None. |
None
|
tools
|
list[LMTool] | None
|
Tools for enabling tool calling functionality. Defaults to None. |
None
|
response_schema
|
ResponseSchema | None
|
Schema for structured output generation. Defaults to None. |
None
|
output_analytics
|
bool
|
Whether to output detailed invocation analytics including token usage and timing. Defaults to False. |
False
|
retry_config
|
RetryConfig | None
|
Configuration for retry behavior on failures. Defaults to None. |
None
|
thinking
|
bool | ThinkingConfig
|
A boolean or ThinkingConfig object to configure thinking. Defaults to False. |
False
|
input_transformer
|
InputTransformerType
|
The type of input transformer to use. Defaults to InputTransformerType.IDENTITY, which returns the input without transformation. |
IDENTITY
|
output_transformer
|
OutputTransformerType
|
The type of output transformer to use. Defaults to OutputTransformerType.IDENTITY, which returns the output without transformation. |
IDENTITY
|
SeaLionLMInvoker(model_name, api_key=None, model_kwargs=None, default_hyperparameters=None, tools=None, response_schema=None, output_analytics=False, retry_config=None, input_transformer=InputTransformerType.IDENTITY, output_transformer=OutputTransformerType.IDENTITY)
Bases: OpenAIChatCompletionsLMInvoker
A language model invoker to interact with SEA-LION API.
Examples:
lm_invoker = SeaLionLMInvoker(model_name="aisingapore/Qwen-SEA-LION-v4-32B-IT")
result = await lm_invoker.invoke("Hi there!")
Supported features
- Basic invocation
- Streaming output
- Message roles
- Structured output
- Tool calling
- Output analytics
- Retry and timeout
- Extra capabilities a. Input transformer b. Output transformer
For full documentation and examples, please refer to the LM Invoker tutorial: https://gdplabs.gitbook.io/sdk/gen-ai-sdk/tutorials/inference/lm-invoker
Attributes:
| Name | Type | Description |
|---|---|---|
model_id |
str
|
The model ID of the language model. |
model_provider |
str
|
The provider of the language model. |
model_name |
str
|
The name of the language model. |
client_kwargs |
dict[str, Any]
|
The keyword arguments for the OpenAI client. |
default_config |
dict[str, Any]
|
Default config for invoking the model. |
tools |
list[Tool]
|
The list of tools provided to the model to enable tool calling. |
response_schema |
ResponseSchema | None
|
The schema of the response. If provided, the model will output a structured response as defined by the schema. Supports both Pydantic BaseModel and JSON schema dictionary. |
output_analytics |
bool
|
Whether to output the invocation analytics. |
retry_config |
RetryConfig | None
|
The retry configuration for the language model. |
input_transformer |
InputTransformerType
|
The type of input transformer to use. |
output_transformer |
OutputTransformerType
|
The type of output transformer to use. |
Initializes a new instance of the SeaLionLMInvoker class.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
model_name
|
str
|
The name of the SEA-LION language model. |
required |
api_key
|
str | None
|
The API key for authenticating with the SEA-LION API.
Defaults to None, in which case the |
None
|
model_kwargs
|
dict[str, Any] | None
|
Additional model parameters. Defaults to None. |
None
|
default_hyperparameters
|
dict[str, Any] | None
|
Default hyperparameters for invoking the model. Defaults to None. |
None
|
tools
|
list[LMTool] | None
|
Tools provided to the model to enable tool calling. Defaults to None. |
None
|
response_schema
|
ResponseSchema | None
|
The schema of the response. If provided, the model will output a structured response as defined by the schema. Supports both Pydantic BaseModel and JSON schema dictionary. Defaults to None. |
None
|
output_analytics
|
bool
|
Whether to output the invocation analytics. Defaults to False. |
False
|
retry_config
|
RetryConfig | None
|
The retry configuration for the language model. Defaults to None, in which case a default config with no retry and 30.0 seconds timeout will be used. |
None
|
input_transformer
|
InputTransformerType
|
The type of input transformer to use. Defaults to InputTransformerType.IDENTITY, which returns the input without transformation. |
IDENTITY
|
output_transformer
|
OutputTransformerType
|
The type of output transformer to use. Defaults to OutputTransformerType.IDENTITY, which returns the output without transformation. |
IDENTITY
|
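SEA-LION is accessed through the Chat Completions invoker, so a tool for tool calling can be sketched in the Chat Completions function-calling format. The SDK's LMTool type may wrap this differently; the tool name and schema below are illustrative:

```python
# A toy tool implementation plus its Chat Completions-style declaration.
# The declaration format is an assumption about what LMTool wraps.
def get_weather(city: str) -> str:
    """Toy tool implementation."""
    return f"Sunny in {city}"


weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

print(get_weather("Singapore"))  # Sunny in Singapore
```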
XAILMInvoker(model_name, api_key=None, model_kwargs=None, default_hyperparameters=None, tools=None, response_schema=None, output_analytics=False, retry_config=None, thinking=False, input_transformer=InputTransformerType.IDENTITY, output_transformer=OutputTransformerType.IDENTITY)
Bases: StreamingBufferMixin, BaseLMInvoker
A language model invoker to interact with xAI language models.
Examples:
lm_invoker = XAILMInvoker(model_name="grok-3")
result = await lm_invoker.invoke("Hi there!")
Supported features
- Basic invocation
- Streaming output
- Message roles
- Multimodal input a. Text b. Image
- Structured output
- Tool calling
- Native tools a. Web search b. Image generation
- Thinking
- Output analytics
- Retry and timeout
- Extra capabilities a. Input transformer b. Output transformer
For full documentation and examples, please refer to the LM Invoker tutorial: https://gdplabs.gitbook.io/sdk/gen-ai-sdk/tutorials/inference/lm-invoker
Attributes:
| Name | Type | Description |
|---|---|---|
model_id |
str
|
The model ID of the language model. |
model_provider |
str
|
The provider of the language model. |
model_name |
str
|
The name of the language model. |
client_params |
dict[str, Any]
|
The xAI client initialization parameters. |
default_config |
dict[str, Any]
|
Default config for invoking the model. |
tools |
list[Tool]
|
The list of tools provided to the model to enable tool calling. |
response_schema |
ResponseSchema | None
|
The schema of the response. If provided, the model will output a structured response as defined by the schema. Supports both Pydantic BaseModel and JSON schema dictionary. |
output_analytics |
bool
|
Whether to output the invocation analytics. |
retry_config |
RetryConfig | None
|
The retry configuration for the language model. |
thinking |
ThinkingConfig
|
The thinking configuration for the language model. |
image_generation |
bool
|
Whether to enable image generation. |
input_transformer |
InputTransformerType
|
The type of input transformer to use. |
output_transformer |
OutputTransformerType
|
The type of output transformer to use. |
Initializes a new instance of the XAILMInvoker class.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
model_name
|
str
|
The name of the xAI model. |
required |
api_key
|
str | None
|
The API key for authenticating with xAI. Defaults to None, in which
case the |
None
|
model_kwargs
|
dict[str, Any] | None
|
Additional model parameters. Defaults to None. |
None
|
default_hyperparameters
|
dict[str, Any] | None
|
Default hyperparameters for invoking the model. Defaults to None. |
None
|
tools
|
list[LMTool] | None
|
Tools provided to the language model to enable tool calling. Defaults to None. |
None
|
response_schema
|
ResponseSchema | None
|
The schema of the response. If provided, the model will output a structured response as defined by the schema. Supports both Pydantic BaseModel and JSON schema dictionary. Defaults to None. |
None
|
output_analytics
|
bool
|
Whether to output the invocation analytics. Defaults to False. |
False
|
retry_config
|
RetryConfig | None
|
The retry configuration for the language model. Defaults to None, in which case a default config with no retry and 30.0 seconds timeout is used. |
None
|
thinking
|
bool | ThinkingConfig
|
A boolean or ThinkingConfig object to configure thinking. Defaults to False. |
False
|
input_transformer
|
InputTransformerType
|
The type of input transformer to use. Defaults to InputTransformerType.IDENTITY, which returns the input without transformation. |
IDENTITY
|
output_transformer
|
OutputTransformerType
|
The type of output transformer to use. Defaults to OutputTransformerType.IDENTITY, which returns the output without transformation. |
IDENTITY
|
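Every invoker accepts input_transformer and output_transformer, with IDENTITY passing data through unchanged. A sketch of that pipeline; the real transformer types are enum members, so the plain callables here are stand-ins:

```python
# Sketch of the transformer pipeline: transform input, invoke,
# then transform output. IDENTITY corresponds to a no-op callable.
def identity(value):
    return value


def invoke_with_transformers(invoke, prompt,
                             input_transformer=identity,
                             output_transformer=identity):
    """Apply the input transformer, invoke, then apply the output transformer."""
    return output_transformer(invoke(input_transformer(prompt)))


# Stand-in "invoker" (str.upper) with a whitespace-stripping input transformer:
result = invoke_with_transformers(str.upper, "  hi there  ",
                                  input_transformer=str.strip)
print(result)  # HI THERE
```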