Builder
Modules
build_em_invoker
Defines a convenience function to build an embedding model invoker.
References
NONE
Key
Defines valid keys in the config.
build_em_invoker(model_id, credentials=None, config=None)
Build an embedding model invoker based on the provided configurations.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `model_id` | `str \| ModelId` | The model id, which can either be a `ModelId` instance or a string in the `provider/model-name` format; provider-specific variations (endpoints, deployments) are shown in the usage examples below. | *required* |
| `credentials` | `str \| dict[str, Any] \| None` | The credentials for the embedding model. Can either be: 1. An API key. 2. A path to a credentials JSON file, currently only supported for Google Vertex AI. 3. A dictionary of credentials, currently only supported for LangChain. Defaults to `None`, in which case the credentials will be loaded from the appropriate environment variables. | `None` |
| `config` | `dict[str, Any] \| None` | Additional configuration for the embedding model. Defaults to `None`. | `None` |
Returns:

| Name | Type | Description |
| --- | --- | --- |
| `BaseEMInvoker` | `BaseEMInvoker` | The initialized embedding model invoker. |
Raises:

| Type | Description |
| --- | --- |
| `ValueError` | If the provider is invalid. |
Usage examples
Using Google Gen AI (via API key)

```python
em_invoker = build_em_invoker(
    model_id="google/text-embedding-004",
    credentials="AIzaSyD..."
)
```

The credentials can also be provided through the `GOOGLE_API_KEY` environment variable.
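Because `credentials` defaults to `None` and is then loaded from the environment, the key can also be set before building the invoker; a minimal sketch of that pattern:

```python
import os

# With the key in GOOGLE_API_KEY, `credentials` can be omitted and
# build_em_invoker loads it from the environment.
os.environ["GOOGLE_API_KEY"] = "AIzaSyD..."

em_invoker = build_em_invoker(model_id="google/text-embedding-004")
```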
Using Google Vertex AI (via service account)

```python
em_invoker = build_em_invoker(
    model_id="google/text-embedding-004",
    credentials="/path/to/google-credentials.json"
)
```

Providing credentials through an environment variable is not supported for Google Vertex AI.
Using OpenAI

```python
em_invoker = build_em_invoker(
    model_id="openai/text-embedding-3-small",
    credentials="sk-..."
)
```

The credentials can also be provided through the `OPENAI_API_KEY` environment variable.
Using Azure OpenAI

```python
em_invoker = build_em_invoker(
    model_id="azure-openai/https://my-resource.openai.azure.com:my-deployment",
    credentials="azure-api-key"
)
```

The credentials can also be provided through the `AZURE_OPENAI_API_KEY` environment variable.
Using OpenAI Compatible endpoint (e.g. Text Embeddings Inference)

```python
em_invoker = build_em_invoker(
    model_id="openai-compatible/https://my-text-embeddings-inference-endpoint.com:model-name",
    credentials="tei-api-key"
)
```

The credentials can also be provided through the `OPENAI_API_KEY` environment variable.
Using TwelveLabs

```python
em_invoker = build_em_invoker(
    model_id="twelvelabs/Marengo-retrieval-2.7",
    credentials="tlk_..."
)
```

The credentials can also be provided through the `TWELVELABS_API_KEY` environment variable.
Using Voyage

```python
em_invoker = build_em_invoker(
    model_id="voyage/voyage-3.5-lite",
    credentials="sk-..."
)
```

The credentials can also be provided through the `VOYAGE_API_KEY` environment variable.
Using LangChain

```python
em_invoker = build_em_invoker(
    model_id="langchain/langchain_openai.OpenAIEmbeddings:text-embedding-3-small",
    credentials={"api_key": "sk-..."}
)
```

The credentials can also be provided through various environment variables, depending on the LangChain module being used. For the list of supported providers and their credential environment variables, please refer to the following page: https://python.langchain.com/docs/integrations/text_embedding/
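As the parameter table notes, a `ModelId` instance can be passed instead of a string; a small sketch using the `ModelId.from_string` helper documented at the bottom of this page:

```python
from gllm_inference.schema import ModelId

# Equivalent to passing the "openai/text-embedding-3-small" string directly.
model_id = ModelId.from_string("openai/text-embedding-3-small")

em_invoker = build_em_invoker(model_id=model_id, credentials="sk-...")
```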
Security warning

Please provide the EM invoker credentials ONLY to the `credentials` parameter. Do not put any kind of credentials in the `config` parameter, as the content of the `config` parameter will be logged.
build_lm_invoker
Defines a convenience function to build a language model invoker.
References
NONE
Key
Defines valid keys in the config.
build_lm_invoker(model_id, credentials=None, config=None)
Build a language model invoker based on the provided configurations.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `model_id` | `str \| ModelId` | The model id, which can either be a `ModelId` instance or a string in the `provider/model-name` format; provider-specific variations (endpoints, deployments) are shown in the usage examples below. | *required* |
| `credentials` | `str \| dict[str, Any] \| None` | The credentials for the language model. Can either be: 1. An API key. 2. A path to a credentials JSON file, currently only supported for Google Vertex AI. 3. A dictionary of credentials, currently supported for Bedrock and LangChain. Defaults to `None`, in which case the credentials will be loaded from the appropriate environment variables. | `None` |
| `config` | `dict[str, Any] \| None` | Additional configuration for the language model. Defaults to `None`. | `None` |
Returns:

| Name | Type | Description |
| --- | --- | --- |
| `BaseLMInvoker` | `BaseLMInvoker` | The initialized language model invoker. |
Raises:

| Type | Description |
| --- | --- |
| `ValueError` | If the provider is invalid. |
Usage examples
Using Anthropic

```python
lm_invoker = build_lm_invoker(
    model_id="anthropic/claude-3-5-sonnet-latest",
    credentials="sk-ant-api03-..."
)
```

The credentials can also be provided through the `ANTHROPIC_API_KEY` environment variable.
Using Bedrock

```python
lm_invoker = build_lm_invoker(
    model_id="bedrock/us.anthropic.claude-sonnet-4-20250514-v1:0",
    credentials={
        "access_key_id": "Abc123...",
        "secret_access_key": "Xyz123...",
    },
)
```

Providing credentials through an environment variable is not supported for Bedrock.
Using Datasaur LLM Projects Deployment API

```python
lm_invoker = build_lm_invoker(
    model_id="datasaur/https://deployment.datasaur.ai/api/deployment/teamId/deploymentId/",
    credentials="..."
)
```

The credentials can also be provided through the `DATASAUR_API_KEY` environment variable.
Using Google Gen AI (via API key)

```python
lm_invoker = build_lm_invoker(
    model_id="google/gemini-1.5-flash-latest",
    credentials="AIzaSyD..."
)
```

The credentials can also be provided through the `GOOGLE_API_KEY` environment variable.
Using Google Vertex AI (via service account)

```python
lm_invoker = build_lm_invoker(
    model_id="google/gemini-1.5-flash",
    credentials="/path/to/google-credentials.json"
)
```

Providing credentials through an environment variable is not supported for Google Vertex AI.
Using OpenAI

```python
lm_invoker = build_lm_invoker(
    model_id="openai/gpt-4o-mini",
    credentials="sk-..."
)
```

The credentials can also be provided through the `OPENAI_API_KEY` environment variable.
Using Azure OpenAI

```python
lm_invoker = build_lm_invoker(
    model_id="azure-openai/https://my-resource.openai.azure.com:my-deployment",
    credentials="azure-api-key"
)
```

The credentials can also be provided through the `AZURE_OPENAI_API_KEY` environment variable.
Using OpenAI Compatible endpoint (e.g. Groq)

```python
lm_invoker = build_lm_invoker(
    model_id="openai-compatible/https://api.groq.com/openai/v1:llama3-8b-8192",
    credentials="gsk_..."
)
```

The credentials can also be provided through the `OPENAI_API_KEY` environment variable.
Using LangChain

```python
lm_invoker = build_lm_invoker(
    model_id="langchain/langchain_openai.ChatOpenAI:gpt-4o-mini",
    credentials={"api_key": "sk-..."}
)
```

The credentials can also be provided through various environment variables, depending on the LangChain module being used. For the list of supported providers and their credential environment variables, please refer to the following table: https://python.langchain.com/docs/integrations/chat/#featured-providers
Using LiteLLM

```python
import os

# LiteLLM credentials are read from provider-specific environment variables.
os.environ["OPENAI_API_KEY"] = "sk-..."

lm_invoker = build_lm_invoker(
    model_id="litellm/openai/gpt-4o-mini",
)
```

For the list of supported providers, please refer to the following page: https://docs.litellm.ai/docs/providers/
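For the `config` parameter, a sketch passing extra configuration; `default_hyperparameters` mirrors the config key used in the `build_lm_request_processor` examples below, and it is an assumption that `build_lm_invoker` accepts the same key (valid keys are defined by this module's `Key` class):

```python
# A sketch passing extra configuration. `default_hyperparameters` mirrors
# the config key used in the build_lm_request_processor examples below;
# whether build_lm_invoker accepts it is an assumption to verify.
lm_invoker = build_lm_invoker(
    model_id="openai/gpt-4o-mini",
    credentials="sk-...",
    config={"default_hyperparameters": {"temperature": 0.5}},
)
```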
Security warning

Please provide the LM invoker credentials ONLY to the `credentials` parameter. Do not put any kind of credentials in the `config` parameter, as the content of the `config` parameter will be logged.
build_lm_request_processor
Defines a convenience function to build a language model request processor.
References
NONE
build_lm_request_processor(model_id, credentials=None, config=None, system_template='', user_template='', output_parser_type='none', output_parser=None)
Build a language model request processor based on the provided configurations.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `model_id` | `str \| ModelId` | The model id, which can either be a `ModelId` instance or a string in the `provider/model-name` format; provider-specific variations (endpoints, deployments) are shown in the usage examples below. | *required* |
| `credentials` | `str \| dict[str, Any] \| None` | The credentials for the language model. Can either be: 1. An API key. 2. A path to a credentials JSON file, currently only supported for Google Vertex AI. 3. A dictionary of credentials, currently supported for Bedrock and LangChain. Defaults to `None`, in which case the credentials will be loaded from the appropriate environment variables. | `None` |
| `config` | `dict[str, Any] \| None` | Additional configuration for the language model. Defaults to `None`. | `None` |
| `system_template` | `str` | The system prompt template. May contain placeholders enclosed in curly braces (e.g. `{context}`). Defaults to an empty string. | `''` |
| `user_template` | `str` | The user prompt template. May contain placeholders enclosed in curly braces (e.g. `{query}`, as in the usage examples below). Defaults to an empty string. | `''` |
| `output_parser_type` | `str` | The type of output parser to use. Supports "json" and "none". Defaults to "none". | `'none'` |
| `output_parser` | `BaseOutputParser \| None` | Deprecated parameter to pass an output parser. Will be removed in v0.5.0. Defaults to `None`. | `None` |
Returns:

| Name | Type | Description |
| --- | --- | --- |
| `LMRequestProcessor` | `LMRequestProcessor` | The initialized language model request processor. |
Raises:

| Type | Description |
| --- | --- |
| `ValueError` | If the provided configuration is invalid. |
Usage examples
Basic usage

```python
lm_request_processor = build_lm_request_processor(
    model_id="openai/gpt-4o-mini",
    credentials="sk-...",
    user_template="{query}",
)
```
With custom LM invoker configuration

```python
# `tool_1` and `tool_2` are assumed to be defined elsewhere.
config = {
    "default_hyperparameters": {"temperature": 0.5},
    "tools": [tool_1, tool_2],
}

lm_request_processor = build_lm_request_processor(
    model_id="openai/gpt-4o-mini",
    credentials="sk-...",
    config=config,
    user_template="{query}",
)
```
With system template

```python
lm_request_processor = build_lm_request_processor(
    model_id="openai/gpt-4o-mini",
    credentials="sk-...",
    system_template="Talk like a pirate.",
    user_template="{query}",
)
```
With output parser

```python
lm_request_processor = build_lm_request_processor(
    model_id="openai/gpt-4o-mini",
    credentials="sk-...",
    user_template="{query}",
    output_parser_type="json",
)
```
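For illustration, a hypothetical sketch of invoking the processor built above; the `process` method name and `prompt_kwargs` keyword are assumptions not confirmed by this page, so check the `LMRequestProcessor` reference for the exact signature:

```python
import asyncio

async def main():
    # Hypothetical call (method name and keyword are assumptions): the
    # {query} placeholder in user_template is filled at request time.
    result = await lm_request_processor.process(prompt_kwargs={"query": "Hi!"})
    print(result)

asyncio.run(main())
```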
Security warning

Please provide the LM invoker credentials ONLY to the `credentials` parameter. Do not put any kind of credentials in the `config` parameter, as the content of the `config` parameter will be logged.
build_output_parser
Defines a convenience function to build an output parser.
References
NONE
build_output_parser(output_parser_type)
Build an output parser based on the provided configurations.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `output_parser_type` | `str` | The type of output parser to use. Supports "json" and "none". | *required* |
Returns:

| Name | Type | Description |
| --- | --- | --- |
| `BaseOutputParser` | `BaseOutputParser \| None` | The initialized output parser, or `None` when `output_parser_type` is "none". |
Raises:

| Type | Description |
| --- | --- |
| `ValueError` | If the provided type is not supported. |
Usage examples
Using JSON output parser

```python
output_parser = build_output_parser(output_parser_type="json")
```

Not using output parser

```python
output_parser = build_output_parser(output_parser_type="none")
```
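A hypothetical sketch of applying the JSON parser to a model response; the `parse` method name is an assumption, so consult the `BaseOutputParser` reference for the exact API:

```python
output_parser = build_output_parser(output_parser_type="json")

# Hypothetical call: `parse` is an assumed method name.
result = output_parser.parse('{"answer": 42}')
print(result)  # expected: {'answer': 42}
```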
model_id
Retains backward compatibility with the old import path.
This module is kept for backward compatibility. Please use the new import path: `from gllm_inference.schema import ModelId, ModelProvider`. Will be removed in v0.5.0.
References
NONE
ModelId
Bases: ModelId
Deprecated: Use `gllm_inference.schema.ModelId` instead.
`from_string(*args, **kwargs)` classmethod

Deprecated: Use `gllm_inference.schema.ModelId.from_string` instead.
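A minimal migration sketch following the deprecation notices above:

```python
# Preferred import path; the old model_id module will be removed in v0.5.0.
from gllm_inference.schema import ModelId

model_id = ModelId.from_string("openai/gpt-4o-mini")
```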