Google LM invoker

Defines a module to interact with Google language models.

Authors

Henry Wicaksono (henry.wicaksono@gdplabs.id)

References

[1] https://googleapis.github.io/python-genai

GoogleLMInvoker(model_name, api_key=None, credentials_path=None, project_id=None, location='us-central1', model_kwargs=None, default_hyperparameters=None, tools=None, response_schema=None, output_analytics=False, retry_config=None, thinking=None, thinking_budget=DEFAULT_THINKING_BUDGET, bind_tools_params=None, with_structured_output_params=None)

Bases: BaseLMInvoker

A language model invoker to interact with Google language models.

Attributes:
  1. model_id (str): The model ID of the language model.
  2. model_provider (str): The provider of the language model.
  3. model_name (str): The name of the language model.
  4. client_params (dict[str, Any]): The Google client instance init parameters.
  5. default_hyperparameters (dict[str, Any]): Default hyperparameters for invoking the model.
  6. tools (list[Any]): The list of tools provided to the model to enable tool calling.
  7. response_schema (ResponseSchema | None): The schema of the response. If provided, the model will output a structured response as defined by the schema. Supports both Pydantic BaseModel and JSON schema dictionary.
  8. output_analytics (bool): Whether to output the invocation analytics.
  9. retry_config (RetryConfig | None): The retry configuration for the language model.
  10. thinking (bool): Whether to enable thinking. Only allowed for thinking models.
  11. thinking_budget (int): The tokens allowed for the thinking process. Only allowed for thinking models. If set to -1, the model will control the budget automatically.

Basic usage

The GoogleLMInvoker can be used as follows:

lm_invoker = GoogleLMInvoker(model_name="gemini-2.5-flash")
result = await lm_invoker.invoke("Hi there!")

Authentication

The GoogleLMInvoker can use either Google Gen AI or Google Vertex AI.

Google Gen AI is recommended for quick prototyping and development. It requires a Gemini API key for authentication.

Usage example:

lm_invoker = GoogleLMInvoker(
    model_name="gemini-2.5-flash",
    api_key="your_api_key"
)

Google Vertex AI is recommended for building production-ready applications. It requires a service account JSON file for authentication.

Usage example:

lm_invoker = GoogleLMInvoker(
    model_name="gemini-2.5-flash",
    credentials_path="path/to/service_account.json"
)

If neither api_key nor credentials_path is provided, Google Gen AI will be used by default. The GOOGLE_API_KEY environment variable will be used for authentication.

Input types
  1. Text.
  2. Audio: ".aac", ".flac", ".mp3", and ".wav".
  3. Document: ".pdf", ".txt", ".csv", ".md", ".css", ".html", and ".xml".
  4. Image: ".jpg", ".jpeg", ".png", and ".webp".
  5. Video: ".x-flv", ".mpeg", ".mpg", ".mp4", ".webm", ".wmv", and ".3gpp".

Non-text inputs must use one of the valid file extensions above and can be passed as Attachment objects.

Non-text inputs can be passed with either the user or assistant role.

Usage example:

text = "What animal is in this image?"
image = Attachment.from_path("path/to/local/image.png")

prompt = [(PromptRole.USER, [text, image])]
result = await lm_invoker.invoke(prompt)

Tool calling

Tool calling is a feature that allows the language model to call tools to perform tasks. Tools can be passed to the model via the tools parameter as a list of LangChain's Tool objects. When tools are provided and the model decides to call a tool, the tool calls are stored in the tool_calls attribute in the output.

Usage example:

lm_invoker = GoogleLMInvoker(..., tools=[tool_1, tool_2])

Output example:

LMOutput(
    response="Let me call the tools...",
    tool_calls=[
        ToolCall(id="123", name="tool_1", args={"key": "value"}),
        ToolCall(id="456", name="tool_2", args={"key": "value"}),
    ]
)
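The returned tool calls can then be executed locally. The sketch below uses a namedtuple stand-in for the ToolCall objects shown above (the real ToolCall class may differ) and a hypothetical registry mapping tool names to plain Python callables:

```python
from collections import namedtuple

# Stand-in mirroring the ToolCall fields shown in the output example above
# (id, name, args); the real ToolCall class may differ.
ToolCall = namedtuple("ToolCall", ["id", "name", "args"])

def dispatch_tool_calls(tool_calls, registry):
    """Run each requested tool locally and collect results keyed by call id."""
    return {call.id: registry[call.name](**call.args) for call in tool_calls}

registry = {"tool_1": lambda key: f"handled {key}"}
results = dispatch_tool_calls([ToolCall("123", "tool_1", {"key": "value"})], registry)
# results == {"123": "handled value"}
```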

Structured output

Structured output is a feature that allows the language model to output a structured response. This feature can be enabled by providing a schema to the response_schema parameter.

The schema must be either a JSON schema dictionary or a Pydantic BaseModel class. If JSON schema is used, it must be compatible with Pydantic's JSON schema, especially for complex schemas. For this reason, it is recommended to create the JSON schema using Pydantic's model_json_schema method.
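As a sketch, a Pydantic-compatible JSON schema like the one in Example 1 below can be generated with model_json_schema (assuming pydantic v2 is installed):

```python
from pydantic import BaseModel

class Animal(BaseModel):
    """A description of an animal."""
    name: str
    color: str

# model_json_schema produces a JSON schema dictionary suitable for response_schema
schema = Animal.model_json_schema()
# schema contains "title", "description", "properties", "required", and "type"
```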

Structured output is not compatible with tool calling. When structured output is enabled, the response is not streamed; standard invocation is performed regardless of whether the event_emitter parameter is provided.

When enabled, the structured output is stored in the structured_output attribute in the output:
  1. If the schema is a JSON schema dictionary, the structured output is a dictionary.
  2. If the schema is a Pydantic BaseModel class, the structured output is a Pydantic model.

Example 1: Using a JSON schema dictionary

Usage example:

schema = {
    "title": "Animal",
    "description": "A description of an animal.",
    "properties": {
        "color": {"title": "Color", "type": "string"},
        "name": {"title": "Name", "type": "string"},
    },
    "required": ["name", "color"],
    "type": "object",
}
lm_invoker = GoogleLMInvoker(..., response_schema=schema)

Output example:

LMOutput(structured_output={"name": "Golden retriever", "color": "Golden"})

Example 2: Using a Pydantic BaseModel class

Usage example:

class Animal(BaseModel):
    name: str
    color: str

lm_invoker = GoogleLMInvoker(..., response_schema=Animal)

Output example:

LMOutput(structured_output=Animal(name="Golden retriever", color="Golden"))

Analytics tracking

Analytics tracking is a feature that allows the module to output additional information about the invocation. This feature can be enabled by setting the output_analytics parameter to True. When enabled, the following attributes will be stored in the output:
  1. token_usage: The token usage.
  2. duration: The duration in seconds.
  3. finish_details: The details about how the generation finished.

Output example:

LMOutput(
    response="Golden retriever is a good dog breed.",
    token_usage=TokenUsage(input_tokens=100, output_tokens=50),
    duration=0.729,
    finish_details={"finish_reason": "STOP", "finish_message": None},
)

Retry and timeout

The GoogleLMInvoker supports retry and timeout configuration. By default, the max retries is set to 0 and the timeout is set to 30.0 seconds. They can be customized by providing a custom RetryConfig object to the retry_config parameter.

Retry config examples:

retry_config = RetryConfig(max_retries=0, timeout=0.0)  # No retry, no timeout
retry_config = RetryConfig(max_retries=0, timeout=10.0)  # No retry, 10.0 seconds timeout
retry_config = RetryConfig(max_retries=5, timeout=0.0)  # 5 max retries, no timeout
retry_config = RetryConfig(max_retries=5, timeout=10.0)  # 5 max retries, 10.0 seconds timeout

Usage example:

lm_invoker = GoogleLMInvoker(..., retry_config=retry_config)

Thinking

Thinking is a feature that allows the language model to have enhanced reasoning capabilities for complex tasks, while also providing transparency into its step-by-step thought process before it delivers its final answer. It can be enabled by setting the thinking parameter to True.

Thinking is only available for certain models, starting from Gemini 2.5 series, and is required for Gemini 2.5 Pro models. Therefore, thinking defaults to True for Gemini 2.5 Pro models and False for other models. Setting thinking to False for Gemini 2.5 Pro models will raise a ValueError. When enabled, the reasoning is stored in the reasoning attribute in the output.

Usage example:

lm_invoker = GoogleLMInvoker(..., thinking=True, thinking_budget=1024)

Output example:

LMOutput(
    response="Golden retriever is a good dog breed.",
    reasoning=[Reasoning(reasoning="Let me think about it...")],
)

When streaming is enabled, thinking tokens are streamed with the EventType.DATA event type.

Streaming output example:

{"type": "data", "value": '{"data_type": "thinking_start", "data_value": ""}', ...}
{"type": "data", "value": '{"data_type": "thinking", "data_value": "Let me think "}', ...}
{"type": "data", "value": '{"data_type": "thinking", "data_value": "about it..."}', ...}
{"type": "data", "value": '{"data_type": "thinking_end", "data_value": ""}', ...}
{"type": "response", "value": "Golden retriever ", ...}
{"type": "response", "value": "is a good dog breed.", ...}
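The value field of each data event is itself a JSON string. A minimal sketch of decoding one streamed thinking event, with the event shape taken from the example above (other event fields are elided):

```python
import json

# One streamed "data" event, shaped like the streaming example above.
event = {"type": "data", "value": '{"data_type": "thinking", "data_value": "Let me think "}'}

# The nested payload must be decoded separately from the outer event.
payload = json.loads(event["value"])
if payload["data_type"] == "thinking":
    print(payload["data_value"], end="")  # emit the thinking token as it arrives
```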

When thinking is enabled, the number of tokens allocated for the thinking process can be set via the thinking_budget parameter. The thinking_budget:
  1. Defaults to -1, in which case the model will control the budget automatically.
  2. Must be greater than the model's minimum thinking budget. For more details, please refer to https://ai.google.dev/gemini-api/docs/thinking

Output types

The output of the GoogleLMInvoker is of type MultimodalOutput, which is a type alias that can represent:
  1. str: The text response if no additional output is needed.
  2. LMOutput: A Pydantic model with the following attributes if any additional output is needed:
    2.1. response (str): The text response.
    2.2. tool_calls (list[ToolCall]): The tool calls, if the tools parameter is defined and the language model decides to invoke tools. Defaults to an empty list.
    2.3. structured_output (dict[str, Any] | BaseModel | None): The structured output, if the response_schema parameter is defined. Defaults to None.
    2.4. token_usage (TokenUsage | None): The token usage analytics, if the output_analytics parameter is set to True. Defaults to None.
    2.5. duration (float | None): The duration of the invocation in seconds, if the output_analytics parameter is set to True. Defaults to None.
    2.6. finish_details (dict[str, Any] | None): The details about how the generation finished, if the output_analytics parameter is set to True. Defaults to None.
    2.7. reasoning (list[Reasoning]): The reasoning objects, if the thinking parameter is set to True. Defaults to an empty list.
    2.8. citations (list[Chunk]): The citations. Currently not supported. Defaults to an empty list.
    2.9. code_exec_results (list[CodeExecResult]): The code execution results. Currently not supported. Defaults to an empty list.
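A caller can normalize this union into plain text. A minimal sketch, where extract_text is a hypothetical helper (not part of the library):

```python
def extract_text(result) -> str:
    """Return the text response whether the result is a str or an LMOutput."""
    if isinstance(result, str):
        return result
    return result.response  # LMOutput carries the text in its response attribute

# Works directly for the plain-string case:
text = extract_text("Golden retriever is a good dog breed.")
```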

Initializes a new instance of the GoogleLMInvoker class.

Parameters:
  1. model_name (str): The name of the model to use. Required.
  2. api_key (str | None): Required for Google Gen AI authentication. Cannot be used together with credentials_path. Defaults to None.
  3. credentials_path (str | None): Required for Google Vertex AI authentication. Path to the service account credentials JSON file. Cannot be used together with api_key. Defaults to None.
  4. project_id (str | None): The Google Cloud project ID for Vertex AI. Only used when authenticating with credentials_path. Defaults to None, in which case it will be loaded from the credentials file.
  5. location (str): The location of the Google Cloud project for Vertex AI. Only used when authenticating with credentials_path. Defaults to "us-central1".
  6. model_kwargs (dict[str, Any] | None): Additional keyword arguments for the Google Vertex AI client. Defaults to None.
  7. default_hyperparameters (dict[str, Any] | None): Default hyperparameters for invoking the model. Defaults to None.
  8. tools (list[Tool] | None): Tools provided to the language model to enable tool calling. Defaults to None.
  9. response_schema (ResponseSchema | None): The schema of the response. If provided, the model will output a structured response as defined by the schema. Supports both Pydantic BaseModel and JSON schema dictionary. Defaults to None.
  10. output_analytics (bool): Whether to output the invocation analytics. Defaults to False.
  11. retry_config (RetryConfig | None): The retry configuration for the language model. Defaults to None, in which case a default config with no retry and a 30.0 seconds timeout is used.
  12. thinking (bool | None): Whether to enable thinking. Only allowed for thinking models. Defaults to True for Gemini 2.5 Pro models and False for other models.
  13. thinking_budget (int): The tokens allowed for the thinking process. Only allowed for thinking models. Defaults to DEFAULT_THINKING_BUDGET (-1), in which case the model will control the budget automatically.
  14. bind_tools_params (dict[str, Any] | None): Deprecated parameter to add tool calling capability. If provided, must at least include the tools key, which is equivalent to the tools parameter. Retained for backward compatibility. Defaults to None.
  15. with_structured_output_params (dict[str, Any] | None): Deprecated parameter to instruct the model to produce output with a certain schema. If provided, must at least include the schema key, which is equivalent to the response_schema parameter. Retained for backward compatibility. Defaults to None.
Note

If neither api_key nor credentials_path is provided, Google Gen AI will be used by default. The GOOGLE_API_KEY environment variable will be used for authentication.

set_response_schema(response_schema)

Sets the response schema for the Google language model.

This method sets the response schema for the Google language model. Any existing response schema will be replaced.

Parameters:
  1. response_schema (ResponseSchema | None): The response schema to be used. Required.

Raises:
  1. ValueError: If tools are already set.

set_tools(tools)

Sets the tools for the Google language model.

This method sets the tools for the Google language model. Any existing tools will be replaced.

Parameters:
  1. tools (list[Tool]): The list of tools to be used. Required.

Raises:
  1. ValueError: If a response schema is already set.