Builder
Modules
_build_invoker
Defines an internal utility function to build a model invoker.
References
NONE
Key
Defines valid keys in the config.
build_em_invoker
Defines a convenience function to build an embedding model invoker.
References
NONE
`build_em_invoker(model_id, credentials=None, config=None)`
Build an embedding model invoker based on the provided configurations.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model_id | str \| ModelId | The model id. Can either be a ModelId instance or a string in a format defined in the following page: https://gdplabs.gitbook.io/sdk/resources/supported-models#embedding-models-ems | required |
| credentials | str \| dict[str, Any] \| None | The credentials for the embedding model. Can either be: 1. An API key. 2. A path to a credentials JSON file, currently only supported for Google Vertex AI. 3. A dictionary of credentials, currently only supported for LangChain. Defaults to None, in which case the credentials will be loaded from the appropriate environment variables. | None |
| config | dict[str, Any] \| None | Additional configuration for the embedding model. Defaults to None. | None |
Returns:

| Name | Type | Description |
|---|---|---|
| BaseEMInvoker | BaseEMInvoker | The initialized embedding model invoker. |
Raises:

| Type | Description |
|---|---|
| ValueError | If the provider is invalid. |
Usage examples
Using Bedrock
```python
em_invoker = build_em_invoker(
    model_id="bedrock/cohere.embed-english-v3",
    credentials={
        "access_key_id": "Abc123...",
        "secret_access_key": "Xyz123...",
    },
)
```
The credentials can also be provided through the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
environment variables.
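Equivalently, the credentials argument can be omitted once the environment variables are set. A minimal sketch of this documented fallback behavior:

```python
import os

# With these set, build_em_invoker loads the credentials from the environment.
os.environ["AWS_ACCESS_KEY_ID"] = "Abc123..."
os.environ["AWS_SECRET_ACCESS_KEY"] = "Xyz123..."

em_invoker = build_em_invoker(model_id="bedrock/cohere.embed-english-v3")
```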
Using Google Gen AI (via API key)
```python
em_invoker = build_em_invoker(
    model_id="google/text-embedding-004",
    credentials="AIzaSyD..."
)
```
The credentials can also be provided through the GOOGLE_API_KEY environment variable.
Using Google Vertex AI (via service account)
```python
em_invoker = build_em_invoker(
    model_id="google/text-embedding-004",
    credentials="/path/to/google-credentials.json"
)
```
Providing credentials through environment variables is not supported for Google Vertex AI.
Using Jina
```python
em_invoker = build_em_invoker(
    model_id="jina/jina-embeddings-v2-large",
    credentials="jina-api-key"
)
```
The credentials can also be provided through the JINA_API_KEY environment variable. For the list of supported
models, please refer to the following page: https://jina.ai/models
Using OpenAI
```python
em_invoker = build_em_invoker(
    model_id="openai/text-embedding-3-small",
    credentials="sk-..."
)
```
The credentials can also be provided through the OPENAI_API_KEY environment variable.
Using OpenAI Embeddings API-compatible endpoints (e.g. vLLM)
```python
em_invoker = build_em_invoker(
    model_id="openai/https://my-vllm-url:8000/v1:my-model-name",
    credentials="sk-..."
)
```
The credentials can also be provided through the OPENAI_API_KEY environment variable.
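Because the model id embeds both the endpoint URL and the model name, it can be convenient to assemble it from parts. A small sketch; the variable names are illustrative:

```python
# Compose the documented "openai/<base_url>:<model_name>" id from its parts.
base_url = "https://my-vllm-url:8000/v1"  # any OpenAI Embeddings API-compatible endpoint
model_name = "my-model-name"              # the model served at that endpoint

em_invoker = build_em_invoker(
    model_id=f"openai/{base_url}:{model_name}",
    credentials="sk-...",
)
```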
Using Azure OpenAI
```python
em_invoker = build_em_invoker(
    model_id="azure-openai/https://my-resource.openai.azure.com/openai/v1:my-deployment",
    credentials="azure-api-key"
)
```
The credentials can also be provided through the AZURE_OPENAI_API_KEY environment variable.
Using TwelveLabs
```python
em_invoker = build_em_invoker(
    model_id="twelvelabs/Marengo-retrieval-2.7",
    credentials="tlk_..."
)
```
The credentials can also be provided through the TWELVELABS_API_KEY environment variable.
Using Voyage
```python
em_invoker = build_em_invoker(
    model_id="voyage/voyage-3.5-lite",
    credentials="sk-..."
)
```
The credentials can also be provided through the VOYAGE_API_KEY environment variable.
Using LangChain
```python
em_invoker = build_em_invoker(
    model_id="langchain/langchain_openai.OpenAIEmbeddings:text-embedding-3-small",
    credentials={"api_key": "sk-..."}
)
```
The credentials can also be provided through various environment variables depending on the LangChain module being used. For the list of supported providers and their corresponding credential environment variables, please refer to the following page: https://python.langchain.com/docs/integrations/text_embedding/
Security warning
Please provide the EM invoker credentials ONLY to the credentials parameter. Do not put credentials of any kind in the config parameter, as its contents will be logged.
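For example (illustrative only; the config key shown is hypothetical):

```python
# Do: pass the secret via the credentials parameter.
em_invoker = build_em_invoker(
    model_id="openai/text-embedding-3-small",
    credentials="sk-...",
)

# Don't: the config parameter is logged, so any secret placed in it leaks.
# em_invoker = build_em_invoker(
#     model_id="openai/text-embedding-3-small",
#     config={"api_key": "sk-..."},  # hypothetical key; would end up in logs
# )
```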
build_lm_invoker
Defines a convenience function to build a language model invoker.
References
NONE
`build_lm_invoker(model_id, credentials=None, config=None)`
Build a language model invoker based on the provided configurations.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model_id | str \| ModelId | The model id. Can either be a ModelId instance or a string in a format defined in the following page: https://gdplabs.gitbook.io/sdk/resources/supported-models#language-models-lms | required |
| credentials | str \| dict[str, Any] \| None | The credentials for the language model. Can either be: 1. An API key. 2. A path to a credentials JSON file, currently only supported for Google Vertex AI. 3. A dictionary of credentials, currently supported for Bedrock and LangChain. Defaults to None, in which case the credentials will be loaded from the appropriate environment variables. | None |
| config | dict[str, Any] \| None | Additional configuration for the language model. Defaults to None. | None |
Returns:

| Name | Type | Description |
|---|---|---|
| BaseLMInvoker | BaseLMInvoker | The initialized language model invoker. |
Raises:

| Type | Description |
|---|---|
| ValueError | If the provider is invalid. |
Usage examples
Using Anthropic
```python
lm_invoker = build_lm_invoker(
    model_id="anthropic/claude-3-5-sonnet-latest",
    credentials="sk-ant-api03-..."
)
```
The credentials can also be provided through the ANTHROPIC_API_KEY environment variable.
Using Bedrock
```python
lm_invoker = build_lm_invoker(
    model_id="bedrock/us.anthropic.claude-sonnet-4-20250514-v1:0",
    credentials={
        "access_key_id": "Abc123...",
        "secret_access_key": "Xyz123...",
    },
)
```
The credentials can also be provided through the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
environment variables.
Using Datasaur LLM Projects Deployment API
```python
lm_invoker = build_lm_invoker(
    model_id="datasaur/https://deployment.datasaur.ai/api/deployment/teamId/deploymentId/",
    credentials="..."
)
```
The credentials can also be provided through the DATASAUR_API_KEY environment variable.
Using Google Gen AI (via API key)
```python
lm_invoker = build_lm_invoker(
    model_id="google/gemini-2.5-flash-lite",
    credentials="AIzaSyD..."
)
```
The credentials can also be provided through the GOOGLE_API_KEY environment variable.
Using Google Vertex AI (via service account)
```python
lm_invoker = build_lm_invoker(
    model_id="google/gemini-2.5-flash-lite",
    credentials="/path/to/google-credentials.json"
)
```
Providing credentials through environment variables is not supported for Google Vertex AI.
Using OpenAI
```python
lm_invoker = build_lm_invoker(
    model_id="openai/gpt-5-nano",
    credentials="sk-..."
)
```
The credentials can also be provided through the OPENAI_API_KEY environment variable.
Using OpenAI with Chat Completions API
```python
lm_invoker = build_lm_invoker(
    model_id="openai-chat-completions/gpt-5-nano",
    credentials="sk-..."
)
```
The credentials can also be provided through the OPENAI_API_KEY environment variable.
Using OpenAI Responses API-compatible endpoints (e.g. SGLang)
```python
lm_invoker = build_lm_invoker(
    model_id="openai/https://my-sglang-url:8000/v1:my-model-name",
    credentials="sk-..."
)
```
The credentials can also be provided through the OPENAI_API_KEY environment variable.
Using OpenAI Chat Completions API-compatible endpoints (e.g. Groq)
```python
lm_invoker = build_lm_invoker(
    model_id="openai-chat-completions/https://api.groq.com/openai/v1:llama3-8b-8192",
    credentials="gsk_..."
)
```
The credentials can also be provided through the OPENAI_API_KEY environment variable.
Using Azure OpenAI
```python
lm_invoker = build_lm_invoker(
    model_id="azure-openai/https://my-resource.openai.azure.com/openai/v1:my-deployment",
    credentials="azure-api-key"
)
```
The credentials can also be provided through the AZURE_OPENAI_API_KEY environment variable.
Using LangChain
```python
lm_invoker = build_lm_invoker(
    model_id="langchain/langchain_openai.ChatOpenAI:gpt-4o-mini",
    credentials={"api_key": "sk-..."}
)
```
The credentials can also be provided through various environment variables depending on the LangChain module being used. For the list of supported providers and their corresponding credential environment variables, please refer to the following table: https://python.langchain.com/docs/integrations/chat/#featured-providers
Using LiteLLM
```python
import os

os.environ["OPENAI_API_KEY"] = "sk-..."

lm_invoker = build_lm_invoker(
    model_id="litellm/openai/gpt-4o-mini",
)
```
For the list of supported providers, please refer to the following page: https://docs.litellm.ai/docs/providers/
Using Portkey
Portkey supports multiple authentication methods with strict precedence order. Authentication methods are mutually exclusive and cannot be combined.
Config ID Authentication (Highest Precedence)
```python
lm_invoker = build_lm_invoker(
    model_id="portkey/any-model",
    credentials="portkey-api-key",
    config={"config": "pc-openai-4f6905"}
)
```
Model Catalog Authentication (Combined Format)
```python
lm_invoker = build_lm_invoker(
    model_id="portkey/@openai-custom/gpt-4o",
    credentials="portkey-api-key"
)
```
Model Catalog Authentication (Separate Parameters)
```python
lm_invoker = build_lm_invoker(
    model_id="portkey/gpt-4o",
    credentials="portkey-api-key",
    config={"provider": "@openai-custom"}
)
```
Direct Provider Authentication
```python
lm_invoker = build_lm_invoker(
    model_id="portkey/gpt-4o",
    credentials={
        "portkey_api_key": "portkey-api-key",
        "api_key": "sk-...",  # Provider's API key
        "provider": "openai"  # Direct provider (no '@' prefix)
    }
)
```
Custom Host Override
```python
lm_invoker = build_lm_invoker(
    model_id="portkey/@custom-provider/gpt-4o",
    credentials="portkey-api-key",
    config={"custom_host": "https://your-custom-endpoint.com"}
)
```
The Portkey API key can also be provided through the PORTKEY_API_KEY environment variable.
For more details on authentication methods, please refer to:
https://portkey.ai/docs/product/ai-gateway/universal-api
Using xAI
```python
lm_invoker = build_lm_invoker(
    model_id="xai/grok-3",
    credentials="xai-..."
)
```
The credentials can also be provided through the XAI_API_KEY environment variable.
For the list of supported models, please refer to the following page:
https://docs.x.ai/docs/models
Security warning
Please provide the LM invoker credentials ONLY to the credentials parameter. Do not put credentials of any kind in the config parameter, as its contents will be logged.
build_lm_request_processor
Defines a convenience function to build a language model request processor.
References
NONE
`build_lm_request_processor(model_id, credentials=None, config=None, system_template='', user_template='', key_defaults=None, output_parser_type='none')`
Build a language model request processor based on the provided configurations.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model_id | str \| ModelId | The model id. Can either be a ModelId instance or a string in a format defined in the following page: https://gdplabs.gitbook.io/sdk/resources/supported-models#language-models-lms | required |
| credentials | str \| dict[str, Any] \| None | The credentials for the language model. Can either be: 1. An API key. 2. A path to a credentials JSON file, currently only supported for Google Vertex AI. 3. A dictionary of credentials, currently supported for Bedrock and LangChain. Defaults to None, in which case the credentials will be loaded from the appropriate environment variables. | None |
| config | dict[str, Any] \| None | Additional configuration for the language model. Defaults to None. | None |
| system_template | str | The system prompt template. May contain placeholders enclosed in curly braces. | '' |
| user_template | str | The user prompt template. May contain placeholders enclosed in curly braces. | '' |
| key_defaults | dict[str, str] \| None | Default values for the keys in the prompt templates. Applied when the corresponding keys are not provided in the runtime input. Defaults to None, in which case no default values will be assigned to the keys. | None |
| output_parser_type | str | The type of output parser to use. Supports "json" and "none". Defaults to "none". | 'none' |
Returns:

| Name | Type | Description |
|---|---|---|
| LMRequestProcessor | LMRequestProcessor | The initialized language model request processor. |
Raises:

| Type | Description |
|---|---|
| ValueError | If the provided configuration is invalid. |
Usage examples
Basic usage
```python
lm_request_processor = build_lm_request_processor(
    model_id="openai/gpt-4o-mini",
    credentials="sk-...",
    user_template="{query}",
)
```
With custom LM invoker configuration
```python
# tool_1 and tool_2 are placeholders for tool definitions declared elsewhere.
config = {
    "default_hyperparameters": {"temperature": 0.5},
    "tools": [tool_1, tool_2],
}

lm_request_processor = build_lm_request_processor(
    model_id="openai/gpt-4o-mini",
    credentials="sk-...",
    config=config,
    user_template="{query}",
)
```
With custom prompt builder configuration
```python
lm_request_processor = build_lm_request_processor(
    model_id="openai/gpt-4o-mini",
    credentials="sk-...",
    system_template="Talk like a {role}.",
    user_template="{query}",
    key_defaults={"role": "pirate"},
)
```
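To make the defaults concrete, the comments below trace how the templates render. This follows only from the documented key_defaults semantics; the runtime input shown is illustrative:

```python
# A runtime input {"query": "Where is the treasure?"} omits "role",
# so the default applies and the prompts render as:
#   system: "Talk like a pirate."
#   user:   "Where is the treasure?"
# A runtime input that includes "role" would override the default.
```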
With output parser
```python
lm_request_processor = build_lm_request_processor(
    model_id="openai/gpt-4o-mini",
    credentials="sk-...",
    user_template="{query}",
    output_parser_type="json",
)
```
Security warning
Please provide the LM invoker credentials ONLY to the credentials parameter. Do not put credentials of any kind in the config parameter, as its contents will be logged.
build_output_parser
Defines a convenience function to build an output parser.
References
NONE
`build_output_parser(output_parser_type)`
Build an output parser based on the provided configurations.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| output_parser_type | str | The type of output parser to use. Supports "json" and "none". | required |
Returns:

| Name | Type | Description |
|---|---|---|
| BaseOutputParser | BaseOutputParser \| None | The initialized output parser. |
Raises:

| Type | Description |
|---|---|
| ValueError | If the provided type is not supported. |
Usage examples
Using JSON output parser
output_parser = build_output_parser(output_parser_type="json")
Not using output parser
output_parser = build_output_parser(output_parser_type="none")
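As a usage sketch, a JSON output parser converts the model's raw JSON string into a Python object. The method name used below is an assumption rather than something confirmed by this reference, so check the BaseOutputParser documentation for the exact call:

```python
# Hypothetical usage; the method name `parse` is an assumption.
raw_output = '{"sentiment": "positive", "score": 0.92}'
parsed = output_parser.parse(raw_output)
# parsed would be {"sentiment": "positive", "score": 0.92}
```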