Schema
Defines the schemas used by the GLLM Inference modules.
Attachment
Bases: BaseModel
Defines a file attachment schema.
Attributes:

| Name | Type | Description |
|---|---|---|
| data | bytes | The content data of the file attachment. |
| filename | str | The filename of the file attachment. |
| mime_type | str | The mime type of the file attachment. |
| url | str \| None | The URL of the file attachment. Defaults to None. |
__repr__()
Return string representation of the Attachment.
Returns:

| Name | Type | Description |
|---|---|---|
| str | str | The string representation of the Attachment. |
__str__()
Return string representation of the Attachment.
Returns:

| Name | Type | Description |
|---|---|---|
| str | str | The string representation of the Attachment. |
from_bytes(bytes, filename=None)
classmethod
Creates an Attachment from bytes.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| bytes | bytes | The bytes of the file. | required |
| filename | str \| None | The filename of the file. Defaults to None, in which case the filename will be derived from the mime type. | None |
Returns:

| Name | Type | Description |
|---|---|---|
| Attachment | Attachment | The instantiated Attachment. |
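The docs state that when `filename` is omitted, it is derived from the mime type. A hypothetical sketch of such a derivation using only the standard library; the helper name and the `attachment-<uuid>` naming scheme are assumptions for illustration, not the package's actual behavior:

```python
import mimetypes
import uuid

def derive_filename(mime_type: str) -> str:
    # Guess a file extension from the MIME type; fall back to ".bin"
    # for types the stdlib does not recognize. The "attachment-<uuid>"
    # stem is an assumption made for this sketch.
    extension = mimetypes.guess_extension(mime_type) or ".bin"
    return f"attachment-{uuid.uuid4().hex}{extension}"

print(derive_filename("image/png"))  # e.g. attachment-3f2a….png
```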
from_data_url(data_url, filename=None)
classmethod
Creates an Attachment from a data URL (data:[mime/type];base64,[bytes]).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| data_url | str | The data URL of the file. | required |
| filename | str \| None | The filename of the file. Defaults to None, in which case the filename will be derived from the mime type. | None |
Returns:

| Name | Type | Description |
|---|---|---|
| Attachment | Attachment | The instantiated Attachment. |
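The `data:[mime/type];base64,[bytes]` format mentioned above can be unpacked with the standard library alone. A minimal sketch (the helper name `parse_data_url` is ours, not part of the package):

```python
import base64

def parse_data_url(data_url: str) -> tuple[str, bytes]:
    # Split "data:text/plain;base64,aGVsbG8=" into header and payload,
    # then strip the scheme and encoding markers from the header.
    header, _, payload = data_url.partition(",")
    mime_type = header.removeprefix("data:").removesuffix(";base64")
    return mime_type, base64.b64decode(payload)

encoded = base64.b64encode(b"hello").decode()
mime, raw = parse_data_url(f"data:text/plain;base64,{encoded}")
print(mime, raw)  # text/plain b'hello'
```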
from_path(path, filename=None)
classmethod
Creates an Attachment from a path.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| path | str | The path to the file. | required |
| filename | str \| None | The filename of the file. Defaults to None, in which case the filename will be derived from the path. | None |
Returns:

| Name | Type | Description |
|---|---|---|
| Attachment | Attachment | The instantiated Attachment. |
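Per the parameter description, the default filename comes from the path itself. A sketch of how the documented attributes could plausibly be filled from a path using the stdlib; the helper and the `application/octet-stream` fallback are assumptions, not the package's verified behavior:

```python
import mimetypes
from pathlib import Path

def attachment_fields_from_path(path: str) -> tuple[str, str]:
    # The filename defaults to the last path component; the MIME type
    # is guessed from the extension, falling back to a generic binary
    # type when the extension is unknown (an assumption of this sketch).
    filename = Path(path).name
    mime_type = mimetypes.guess_type(filename)[0] or "application/octet-stream"
    return filename, mime_type

print(attachment_fields_from_path("/tmp/report.pdf"))  # ('report.pdf', 'application/pdf')
```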
from_url(url, filename=None)
classmethod
Creates an Attachment from a URL.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| url | str | The URL of the file. | required |
| filename | str \| None | The filename of the file. Defaults to None, in which case the filename will be derived from the URL. | None |
Returns:

| Name | Type | Description |
|---|---|---|
| Attachment | Attachment | The instantiated Attachment. |
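Deriving a filename from a URL, as the default behavior above describes, typically means taking the last path segment while ignoring the query string. An illustrative stdlib-only sketch (not necessarily how the package does it):

```python
from pathlib import PurePosixPath
from urllib.parse import urlparse

def filename_from_url(url: str) -> str:
    # urlparse drops the query string and fragment from .path;
    # PurePosixPath.name then yields the last path segment.
    return PurePosixPath(urlparse(url).path).name

print(filename_from_url("https://example.com/files/cat.png?size=large"))  # cat.png
```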
write_to_file(path=None)
Writes the Attachment to a file.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| path | str \| None | The path to the file. Defaults to None, in which case the filename will be used as the path. | None |
AttachmentType
Bases: StrEnum
Defines valid attachment types.
CodeExecResult
Bases: BaseModel
Defines a code execution result when a language model is configured to execute code.
Attributes:

| Name | Type | Description |
|---|---|---|
| id | str | The ID of the code execution. Defaults to an empty string. |
| code | str | The executed code. Defaults to an empty string. |
| output | list[str \| Attachment] | The output of the executed code. Defaults to an empty list. |
ContentPlaceholder
Bases: BaseModel
Defines a content placeholder schema.
The ContentPlaceholder represents lazy-loaded content to be sent to the language model; it must be converted into a supported content type before being sent.
Attributes:

| Name | Type | Description |
|---|---|---|
| type | str | The type of the content placeholder. |
| metadata | dict[str, Any] | The metadata of the content placeholder. |
EmitDataType
Bases: StrEnum
Defines valid data types for emitting events.
LMOutput
Bases: BaseModel
Defines the output of a language model.
Attributes:

| Name | Type | Description |
|---|---|---|
| response | str | The text response. Defaults to an empty string. |
| tool_calls | list[ToolCall] | The tool calls, if the language model decides to invoke tools. Defaults to an empty list. |
| structured_output | dict[str, Any] \| BaseModel \| None | The structured output, if a response schema is defined for the language model. Defaults to None. |
| token_usage | TokenUsage \| None | The token usage analytics, if requested. Defaults to None. |
| duration | float \| None | The duration of the invocation in seconds, if requested. Defaults to None. |
| finish_details | dict[str, Any] | The details about how the generation finished, if requested. Defaults to an empty dictionary. |
| reasoning | list[Reasoning] | The reasoning, if the language model is configured to output reasoning. Defaults to an empty list. |
| citations | list[Chunk] | The citations, if the language model outputs citations. Defaults to an empty list. |
| code_exec_results | list[CodeExecResult] | The code execution results, if the language model decides to execute code. Defaults to an empty list. |
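A typical consumer checks `tool_calls` before falling back to the text `response`. The dataclasses below are minimal stand-ins mirroring the documented fields, used only so the sketch runs without the package installed:

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class ToolCallStub:  # stand-in for ToolCall, mirroring its documented fields
    id: str
    name: str
    args: dict[str, Any]

@dataclass
class LMOutputStub:  # stand-in for LMOutput (subset of documented fields)
    response: str = ""
    tool_calls: list[ToolCallStub] = field(default_factory=list)

def handle(output: LMOutputStub) -> str:
    # Tool calls take priority: a non-empty list means the model is
    # requesting a tool invocation rather than giving a final answer.
    if output.tool_calls:
        return f"invoke:{output.tool_calls[0].name}"
    return output.response

print(handle(LMOutputStub(response="hi")))                                 # hi
print(handle(LMOutputStub(tool_calls=[ToolCallStub("1", "search", {})])))  # invoke:search
```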
__repr__()
Return string representation of the LMOutput.
Returns:

| Name | Type | Description |
|---|---|---|
| str | str | The string representation of the LMOutput. |
__str__()
Return string representation of the LMOutput.
Returns:

| Name | Type | Description |
|---|---|---|
| str | str | The string representation of the LMOutput. |
ModelId
Bases: BaseModel
Defines a representation of a valid model id.
Attributes:

| Name | Type | Description |
|---|---|---|
| provider | ModelProvider | The provider of the model. |
| name | str \| None | The name of the model. |
| path | str \| None | The path of the model. |
Provider-specific examples

```python
# Anthropic
model_id = ModelId.from_string("anthropic/claude-3-5-sonnet-latest")

# Bedrock
model_id = ModelId.from_string("bedrock/us.anthropic.claude-sonnet-4-20250514-v1:0")

# Datasaur
model_id = ModelId.from_string("datasaur/https://deployment.datasaur.ai/api/deployment/teamId/deploymentId/")

# Google
model_id = ModelId.from_string("google/gemini-1.5-flash")

# OpenAI
model_id = ModelId.from_string("openai/gpt-4o-mini")

# Azure OpenAI
model_id = ModelId.from_string("azure-openai/https://my-resource.openai.azure.com:my-deployment")

# OpenAI-compatible endpoints (e.g. Groq)
model_id = ModelId.from_string("openai-compatible/https://api.groq.com/openai/v1:llama3-8b-8192")

# Voyage
model_id = ModelId.from_string("voyage/voyage-3.5-lite")

# TwelveLabs
model_id = ModelId.from_string("twelvelabs/Marengo-retrieval-2.7")

# LangChain; for the list of supported providers, see
# https://python.langchain.com/docs/integrations/chat/#featured-providers
model_id = ModelId.from_string("langchain/langchain_openai.ChatOpenAI:gpt-4o-mini")

# LiteLLM; for the list of supported providers, see
# https://docs.litellm.ai/docs/providers/
model_id = ModelId.from_string("litellm/openai/gpt-4o-mini")
```
Custom model name validation example

```python
validation_map = {
    ModelProvider.ANTHROPIC: {"claude-3-5-sonnet-latest"},
    ModelProvider.GOOGLE: {"gemini-1.5-flash", "gemini-1.5-pro"},
    ModelProvider.OPENAI: {"gpt-4o", "gpt-4o-mini"},
}
model_id = ModelId.from_string("...", validation_map)
```
from_string(model_id, validation_map=None)
classmethod
Parse a model id string into a ModelId object.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model_id | str | The model id to parse. Must be in one of the provider-specific formats shown in the examples above. | required |
| validation_map | dict[str, set[str]] \| None | An optional dictionary mapping provider names to sets of valid model names. For providers present in the map, the model name is validated against the corresponding set; for providers absent from the map, the model name is not validated. Defaults to None. | None |
Returns:

| Name | Type | Description |
|---|---|---|
| ModelId | ModelId | The parsed ModelId object. |
Raises:

| Type | Description |
|---|---|
| ValueError | If the provided model id is invalid or if the model name is not valid for the provider. |
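Judging from the provider examples above, a model id string is a provider name, a slash, then a provider-specific remainder. An illustrative parser of just that outer split (not the package's actual implementation, which also handles `:`-separated deployment paths and name validation):

```python
def split_model_id(model_id: str) -> tuple[str, str]:
    # Split only on the first "/" so remainders that themselves contain
    # slashes (URLs, "litellm/openai/gpt-4o-mini", ...) stay intact.
    provider, sep, rest = model_id.partition("/")
    if not sep or not provider or not rest:
        raise ValueError(f"Invalid model id: {model_id!r}")
    return provider, rest

print(split_model_id("openai/gpt-4o-mini"))          # ('openai', 'gpt-4o-mini')
print(split_model_id("litellm/openai/gpt-4o-mini"))  # ('litellm', 'openai/gpt-4o-mini')
```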
to_string()
Convert the ModelId object to a string.
Returns:

| Name | Type | Description |
|---|---|---|
| str | str | The string representation of the ModelId object, in the same provider-specific format accepted by from_string. |
ModelProvider
Bases: StrEnum
Defines the supported model providers.
PromptRole
Bases: StrEnum
Defines valid prompt roles.
Reasoning
Bases: BaseModel
Defines a reasoning output when a language model is configured to use reasoning.
Attributes:

| Name | Type | Description |
|---|---|---|
| id | str | The ID of the reasoning output. Defaults to an empty string. |
| reasoning | str | The reasoning text. Defaults to an empty string. |
| type | str | The type of the reasoning output. Defaults to an empty string. |
| data | str | The additional data of the reasoning output. Defaults to an empty string. |
TokenUsage
Bases: BaseModel
Defines the token usage data structure of a language model.
Attributes:

| Name | Type | Description |
|---|---|---|
| input_tokens | int | The number of input tokens. |
| output_tokens | int | The number of output tokens. |
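Usage records like this are commonly aggregated across calls, e.g. for cost tracking. A trivial illustration using a stand-in dataclass with the two documented fields:

```python
from dataclasses import dataclass

@dataclass
class TokenUsageStub:  # stand-in for TokenUsage with its documented fields
    input_tokens: int
    output_tokens: int

def total_tokens(usages: list[TokenUsageStub]) -> int:
    # Sum both directions across a batch of invocations.
    return sum(u.input_tokens + u.output_tokens for u in usages)

print(total_tokens([TokenUsageStub(120, 30), TokenUsageStub(80, 20)]))  # 250
```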
ToolCall
Bases: BaseModel
Defines a tool call request when a language model decides to invoke a tool.
Attributes:

| Name | Type | Description |
|---|---|---|
| id | str | The ID of the tool call. |
| name | str | The name of the tool. |
| args | dict[str, Any] | The arguments of the tool call. |
ToolResult
Bases: BaseModel
Defines a tool result to be sent back to the language model.
Attributes:

| Name | Type | Description |
|---|---|---|
| id | str | The ID of the tool call. |
| output | str | The output of the tool call. |
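Note that `ToolResult.id` carries the originating tool call's ID, which is how the language model pairs results with its requests. A sketch of that pairing using stand-in dataclasses (the `run_tools` helper and registry shape are our illustration, not part of the package):

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class ToolCallStub:    # stand-in for ToolCall
    id: str
    name: str
    args: dict[str, Any]

@dataclass
class ToolResultStub:  # stand-in for ToolResult
    id: str
    output: str

def run_tools(calls: list[ToolCallStub],
              registry: dict[str, Callable[..., str]]) -> list[ToolResultStub]:
    # Execute each requested tool and echo the call id so the language
    # model can match every result back to its request.
    return [ToolResultStub(id=c.id, output=registry[c.name](**c.args)) for c in calls]

results = run_tools(
    [ToolCallStub(id="call_1", name="add", args={"a": 2, "b": 3})],
    {"add": lambda a, b: str(a + b)},
)
print(results[0].id, results[0].output)  # call_1 5
```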