# Builder

## Modules
### build_em_invoker
Deprecated import path for `build_em_invoker`.

This module provides backward compatibility for code importing from the old location. The actual implementation has been moved to `gllm_inference.em_invoker.build_em_invoker`.
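Backward-compatible re-export modules of this kind typically pair the old path with a `DeprecationWarning`. The following is a minimal sketch of that pattern only; it is not the actual shim in `gllm_inference`, and the delegation target is shown as a comment:

```python
import warnings


def build_em_invoker(*args, **kwargs):
    """Deprecated alias; use gllm_inference.em_invoker.build_em_invoker instead."""
    warnings.warn(
        "Importing build_em_invoker from this module is deprecated and will be "
        "removed in v0.6.0; import it from gllm_inference.em_invoker instead.",
        DeprecationWarning,
        stacklevel=2,
    )
    # Delegate to the relocated implementation (import path per this page):
    # from gllm_inference.em_invoker import build_em_invoker as _impl
    # return _impl(*args, **kwargs)
```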
#### `build_em_invoker(model_id, credentials=None, config=None)`
Deprecated function to build an embedding model invoker.
This function is deprecated and will be removed in v0.6.0. Use `gllm_inference.em_invoker.build_em_invoker` instead.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model_id` | `str \| ModelId` | The model id. | *required* |
| `credentials` | `str \| dict[str, Any] \| None` | The credentials. | `None` |
| `config` | `dict[str, Any] \| None` | Additional configuration. | `None` |
Returns:

| Name | Type | Description |
|---|---|---|
| `BaseEMInvoker` | `BaseEMInvoker` | The initialized embedding model invoker. |
### build_lm_invoker
Deprecated import path for `build_lm_invoker`.

This module provides backward compatibility for code importing from the old location. The actual implementation has been moved to `gllm_inference.lm_invoker.build_lm_invoker`.
#### `build_lm_invoker(model_id, credentials=None, config=None)`
Deprecated function to build a language model invoker.
This function is deprecated and will be removed in v0.6.0. Use `gllm_inference.lm_invoker.build_lm_invoker` instead.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model_id` | `str \| ModelId` | The model id. | *required* |
| `credentials` | `str \| dict[str, Any] \| None` | The credentials. | `None` |
| `config` | `dict[str, Any] \| None` | Additional configuration. | `None` |
Returns:

| Name | Type | Description |
|---|---|---|
| `BaseLMInvoker` | `BaseLMInvoker` | The initialized language model invoker. |
### build_lm_request_processor
Deprecated import path for `build_lm_request_processor`.

This module provides backward compatibility for code importing from the old location. The actual implementation has been moved to `gllm_inference.request_processor.build_lm_request_processor`.
#### `build_lm_invoker(model_id, credentials=None, config=None)`
Build a language model invoker based on the provided configurations.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model_id` | `str \| ModelId` | The model id. Can either be a `ModelId` instance or a string in a format defined in the following page: https://gdplabs.gitbook.io/sdk/resources/supported-models#language-models-lms | *required* |
| `credentials` | `str \| dict[str, Any] \| None` | The credentials for the language model. Can either be: 1. An API key. 2. A path to a credentials JSON file, currently only supported for Google Vertex AI. 3. A dictionary of credentials, currently supported for Bedrock and LangChain. Defaults to `None`, in which case the credentials will be loaded from the appropriate environment variables. | `None` |
| `config` | `dict[str, Any] \| None` | Additional configuration for the language model. Defaults to `None`. | `None` |
Returns:

| Name | Type | Description |
|---|---|---|
| `BaseLMInvoker` | `BaseLMInvoker` | The initialized language model invoker. |
Raises:

| Type | Description |
|---|---|
| `ValueError` | If the provider is invalid. |
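Going by the examples later on this page, a string model id follows a `provider/model-name` shape. The provider check that raises `ValueError` can be pictured with the sketch below; the function name and the provider set are illustrative only (the providers listed are the ones that appear in this page's examples, not an exhaustive list):

```python
# Providers appearing in this page's usage examples (illustrative, not exhaustive).
KNOWN_PROVIDERS = {
    "anthropic", "bedrock", "datasaur", "google", "openai",
    "openai-chat-completions", "azure-openai", "sea-lion",
    "langchain", "litellm", "portkey", "xai",
}


def parse_model_id(model_id: str) -> tuple[str, str]:
    """Split a 'provider/model-name' id and validate the provider prefix."""
    provider, _, name = model_id.partition("/")
    if provider not in KNOWN_PROVIDERS:
        raise ValueError(f"Invalid provider: {provider!r}")
    return provider, name
```

Note that only the first `/` is significant, so nested ids such as `litellm/openai/gpt-4o-mini` keep `openai/gpt-4o-mini` as the model name.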
##### Usage examples
**Using Anthropic**

```python
lm_invoker = build_lm_invoker(
    model_id="anthropic/claude-3-5-sonnet-latest",
    credentials="sk-ant-api03-..."
)
```

The credentials can also be provided through the `ANTHROPIC_API_KEY` environment variable.
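The "credentials default to an environment variable" behavior repeated throughout these examples can be sketched as follows. This is assumed logic for illustration, with `ANTHROPIC_API_KEY` as the example variable; it is not the library's actual resolution code:

```python
import os


def resolve_credentials(credentials=None, env_var="ANTHROPIC_API_KEY"):
    """Return explicit credentials if given, else fall back to the environment."""
    if credentials is not None:
        return credentials
    value = os.environ.get(env_var)
    if value is None:
        raise ValueError(f"No credentials provided and {env_var} is not set.")
    return value
```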
**Using Bedrock**

```python
lm_invoker = build_lm_invoker(
    model_id="bedrock/us.anthropic.claude-sonnet-4-20250514-v1:0",
    credentials={
        "access_key_id": "Abc123...",
        "secret_access_key": "Xyz123...",
    },
)
```

The credentials can also be provided through the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables.
**Using Datasaur LLM Projects Deployment API**

```python
lm_invoker = build_lm_invoker(
    model_id="datasaur/https://deployment.datasaur.ai/api/deployment/teamId/deploymentId/",
    credentials="..."
)
```

The credentials can also be provided through the `DATASAUR_API_KEY` environment variable.
**Using Google Gen AI (via API key)**

```python
lm_invoker = build_lm_invoker(
    model_id="google/gemini-2.5-flash-lite",
    credentials="AIzaSyD..."
)
```

The credentials can also be provided through the `GOOGLE_API_KEY` environment variable.
**Using Google Vertex AI (via service account)**

```python
lm_invoker = build_lm_invoker(
    model_id="google/gemini-2.5-flash-lite",
    credentials="/path/to/google-credentials.json"
)
```

Providing credentials through an environment variable is not supported for Google Vertex AI.
**Using OpenAI**

```python
lm_invoker = build_lm_invoker(
    model_id="openai/gpt-5-nano",
    credentials="sk-..."
)
```

The credentials can also be provided through the `OPENAI_API_KEY` environment variable.
**Using OpenAI with the Chat Completions API**

```python
lm_invoker = build_lm_invoker(
    model_id="openai-chat-completions/gpt-5-nano",
    credentials="sk-..."
)
```

The credentials can also be provided through the `OPENAI_API_KEY` environment variable.
**Using OpenAI Responses API-compatible endpoints (e.g. SGLang)**

```python
lm_invoker = build_lm_invoker(
    model_id="openai/https://my-sglang-url:8000/v1:my-model-name",
    credentials="sk-..."
)
```

The credentials can also be provided through the `OPENAI_API_KEY` environment variable.
**Using OpenAI Chat Completions API-compatible endpoints (e.g. Groq)**

```python
lm_invoker = build_lm_invoker(
    model_id="openai-chat-completions/https://api.groq.com/openai/v1:llama3-8b-8192",
    credentials="gsk_..."
)
```

The credentials can also be provided through the `OPENAI_API_KEY` environment variable.
**Using Azure OpenAI**

```python
lm_invoker = build_lm_invoker(
    model_id="azure-openai/https://my-resource.openai.azure.com/openai/v1:my-deployment",
    credentials="azure-api-key"
)
```

The credentials can also be provided through the `AZURE_OPENAI_API_KEY` environment variable.
**Using SEA-LION**

```python
lm_invoker = build_lm_invoker(
    model_id="sea-lion/aisingapore/Qwen-SEA-LION-v4-32B-IT",
    credentials="sk-..."
)
```

The credentials can also be provided through the `SEA_LION_API_KEY` environment variable.
**Using LangChain**

```python
lm_invoker = build_lm_invoker(
    model_id="langchain/langchain_openai.ChatOpenAI:gpt-4o-mini",
    credentials={"api_key": "sk-..."}
)
```

The credentials can also be provided through various environment variables depending on the LangChain module being used. For the list of supported providers and their corresponding credential environment variables, please refer to the following table: https://python.langchain.com/docs/integrations/chat/#featured-providers
**Using LiteLLM**

```python
import os

os.environ["OPENAI_API_KEY"] = "sk-..."
lm_invoker = build_lm_invoker(
    model_id="litellm/openai/gpt-4o-mini",
)
```

For the list of supported providers, please refer to the following page: https://docs.litellm.ai/docs/providers/
**Using Portkey**

Portkey supports multiple authentication methods with a strict precedence order. Authentication methods are mutually exclusive and cannot be combined.

Config ID Authentication (Highest Precedence):

```python
lm_invoker = build_lm_invoker(
    model_id="portkey/any-model",
    credentials="portkey-api-key",
    config={"config": "pc-openai-4f6905"}
)
```

Model Catalog Authentication (Combined Format):

```python
lm_invoker = build_lm_invoker(
    model_id="portkey/@openai-custom/gpt-4o",
    credentials="portkey-api-key"
)
```

Model Catalog Authentication (Separate Parameters):

```python
lm_invoker = build_lm_invoker(
    model_id="portkey/gpt-4o",
    credentials="portkey-api-key",
    config={"provider": "@openai-custom"}
)
```

Direct Provider Authentication:

```python
lm_invoker = build_lm_invoker(
    model_id="portkey/gpt-4o",
    credentials={
        "portkey_api_key": "portkey-api-key",
        "api_key": "sk-...",  # Provider's API key
        "provider": "openai"  # Direct provider (no '@' prefix)
    }
)
```

Custom Host Override:

```python
lm_invoker = build_lm_invoker(
    model_id="portkey/@custom-provider/gpt-4o",
    credentials="portkey-api-key",
    config={"custom_host": "https://your-custom-endpoint.com"}
)
```

The Portkey API key can also be provided through the `PORTKEY_API_KEY` environment variable. For more details on authentication methods, please refer to: https://portkey.ai/docs/product/ai-gateway/universal-api
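Pieced together from the examples above, the precedence between the four Portkey authentication methods can be sketched as the following selection function. The function name, return labels, and exact detection rules are illustrative assumptions, not part of the library:

```python
def portkey_auth_method(model_id, credentials, config=None):
    """Pick the Portkey auth method by the precedence described above (sketch only)."""
    config = config or {}
    # Drop the leading "portkey/" prefix to inspect the model part of the id.
    model = model_id.split("/", 1)[1] if "/" in model_id else model_id
    if "config" in config:                                   # 1. Config ID (highest)
        return "config_id"
    if model.startswith("@"):                                # 2. Model catalog, combined
        return "model_catalog_combined"
    if str(config.get("provider", "")).startswith("@"):      # 3. Model catalog, separate
        return "model_catalog_separate"
    if isinstance(credentials, dict) and "provider" in credentials:  # 4. Direct provider
        return "direct_provider"
    raise ValueError("No recognised Portkey authentication method.")
```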
**Using xAI**

```python
lm_invoker = build_lm_invoker(
    model_id="xai/grok-3",
    credentials="xai-..."
)
```

The credentials can also be provided through the `XAI_API_KEY` environment variable. For the list of supported models, please refer to the following page: https://docs.x.ai/docs/models
**Security warning**

Please provide the LM invoker credentials ONLY to the `credentials` parameter. Do not put any kind of credentials in the `config` parameter, as the content of the `config` parameter will be logged.
#### `build_lm_request_processor(model_id, credentials=None, config=None, system_template='', user_template='', key_defaults=None, prompt_builder_kwargs=None, output_parser_type='none')`
Deprecated function to build a language model request processor.
This function is deprecated and will be removed in v0.6.0. Use `gllm_inference.request_processor.build_lm_request_processor` instead.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model_id` | `str \| ModelId` | The model id. | *required* |
| `credentials` | `str \| dict[str, Any] \| None` | The credentials. | `None` |
| `config` | `dict[str, Any] \| None` | Additional configuration. | `None` |
| `system_template` | `str` | The system prompt template. Defaults to an empty string. | `''` |
| `user_template` | `str` | The user prompt template. Defaults to an empty string. | `''` |
| `key_defaults` | `dict[str, Any] \| None` | Default values for template keys. | `None` |
| `prompt_builder_kwargs` | `dict[str, Any] \| None` | Additional prompt builder kwargs. | `None` |
| `output_parser_type` | `str` | The output parser type. Defaults to `"none"`. | `'none'` |
Returns:

| Name | Type | Description |
|---|---|---|
| `LMRequestProcessor` | `LMRequestProcessor` | The initialized language model request processor. |
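The `system_template`, `user_template`, and `key_defaults` parameters suggest placeholder substitution, where per-request keys override the defaults. The following sketch assumes `{placeholder}`-style templates, which may differ from the library's actual template syntax:

```python
def render_template(template, keys=None, key_defaults=None):
    """Fill a prompt template; per-request keys take precedence over key_defaults."""
    merged = {**(key_defaults or {}), **(keys or {})}
    return template.format(**merged)
```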
### build_output_parser
Defines a convenience function to build an output parser.
#### `build_output_parser(output_parser_type)`
Build an output parser based on the provided configurations.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `output_parser_type` | `str` | The type of output parser to use. Supports `"json"` and `"none"`. | *required* |
Returns:

| Name | Type | Description |
|---|---|---|
| `BaseOutputParser` | `BaseOutputParser \| None` | The initialized output parser. |
Raises:

| Type | Description |
|---|---|
| `ValueError` | If the provided type is not supported. |
##### Usage examples

**Using the JSON output parser**

```python
output_parser = build_output_parser(output_parser_type="json")
```

**Not using an output parser**

```python
output_parser = build_output_parser(output_parser_type="none")
```
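The two supported types can be pictured with the dispatch below. This is an illustrative stand-in, not the library's `BaseOutputParser` implementation; in particular, the fence-stripping behaviour of the JSON branch is an assumption:

```python
import json
import re


def build_output_parser_sketch(output_parser_type):
    """Illustrative dispatch for the 'json' and 'none' parser types."""
    if output_parser_type == "none":
        return None  # raw model output is passed through unchanged
    if output_parser_type == "json":
        def parse(text):
            # Strip an optional Markdown code fence before decoding the JSON body.
            match = re.search(r"```(?:json)?\s*(.*?)\s*```", text, re.DOTALL)
            return json.loads(match.group(1) if match else text)
        return parse
    raise ValueError(f"Unsupported output parser type: {output_parser_type!r}")
```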