OpenAI Chat Completions LM Invoker

Defines a module to interact with OpenAI language models using the Chat Completions API.

OpenAIChatCompletionsLMInvoker(model_name, api_key=None, base_url=OPENAI_DEFAULT_URL, model_kwargs=None, default_hyperparameters=None, tools=None, response_schema=None, output_analytics=False, retry_config=None, thinking=False, input_transformer=InputTransformerType.IDENTITY, output_transformer=OutputTransformerType.IDENTITY)

Bases: StreamingBufferMixin, BaseLMInvoker

A language model invoker to interact with OpenAI language models using the Chat Completions API.

This class provides support for OpenAI's Chat Completions API schema. Use this class only when you have a specific reason to use the Chat Completions API over the Responses API, as OpenAI recommends using the Responses API whenever possible. The Responses API schema is supported through the OpenAILMInvoker class.

Examples:

lm_invoker = OpenAIChatCompletionsLMInvoker(model_name="gpt-5-nano")
result = await lm_invoker.invoke("Hi there!")

Supported features:
  1. Basic invocation
  2. Streaming output
  3. Message roles
  4. Multimodal input
     a. Text
     b. Audio
     c. Document
     d. Image
  5. Structured output
  6. Tool calling
  7. Thinking
  8. Output analytics
  9. Retry and timeout
  10. Extra capabilities
      a. Input transformer
      b. Output transformer

For full documentation and examples, please refer to the LM Invoker tutorial: https://gdplabs.gitbook.io/sdk/gen-ai-sdk/tutorials/inference/lm-invoker

OpenAI-compatible endpoints

This class can interact with endpoints compatible with OpenAI's Chat Completions API schema. This includes but is not limited to:
  1. DeepInfra (https://deepinfra.com/)
  2. DeepSeek (https://deepseek.com/)
  3. Groq (https://groq.com/)
  4. OpenRouter (https://openrouter.ai/)
  5. Text Generation Inference (https://github.com/huggingface/text-generation-inference)
  6. Together.ai (https://together.ai/)
  7. vLLM (https://vllm.ai/)
To use one of these, simply set base_url to the endpoint URL. Supported features may vary between endpoints; unsupported features will result in an error.

Attributes:

  model_id (str): The model ID of the language model.
  model_provider (str): The provider of the language model.
  model_name (str): The name of the language model.
  client_kwargs (dict[str, Any]): The keyword arguments for the OpenAI client.
  default_config (dict[str, Any]): The default configuration for invoking the model.
  tools (list[Tool]): The list of tools provided to the model to enable tool calling.
  response_schema (ResponseSchema | None): The schema of the response. If provided, the model will output a structured response as defined by the schema. Supports both a Pydantic BaseModel and a JSON schema dictionary.
  thinking (ThinkingConfig): The thinking configuration for the language model.
  output_analytics (bool): Whether to output the invocation analytics.
  retry_config (RetryConfig | None): The retry configuration for the language model.
  input_transformer (InputTransformerType): The type of input transformer to use.
  output_transformer (OutputTransformerType): The type of output transformer to use.

Initializes a new instance of the OpenAIChatCompletionsLMInvoker class.

Parameters:

  model_name (str, required): The name of the OpenAI model.
  api_key (str | None, default None): The API key for authenticating with OpenAI. When None, the OPENAI_API_KEY environment variable will be used. If the endpoint does not require an API key, a dummy value can be passed (e.g. "").
  base_url (str, default OPENAI_DEFAULT_URL): The base URL of a custom endpoint that is compatible with OpenAI's Chat Completions API schema. Defaults to OpenAI's default URL.
  model_kwargs (dict[str, Any] | None, default None): Additional model parameters.
  default_hyperparameters (dict[str, Any] | None, default None): Default hyperparameters for invoking the model.
  tools (list[LMTool] | None, default None): Tools provided to the model to enable tool calling.
  response_schema (ResponseSchema | None, default None): The schema of the response. If provided, the model will output a structured response as defined by the schema. Supports both a Pydantic BaseModel and a JSON schema dictionary.
  output_analytics (bool, default False): Whether to output the invocation analytics.
  retry_config (RetryConfig | None, default None): The retry configuration for the language model. When None, a default config with no retries and a 30.0-second timeout is used.
  thinking (bool | ThinkingConfig, default False): A boolean or ThinkingConfig object to configure thinking.
  input_transformer (InputTransformerType, default InputTransformerType.IDENTITY): The type of input transformer to use. The identity transformer returns the input without transformation.
  output_transformer (OutputTransformerType, default OutputTransformerType.IDENTITY): The type of output transformer to use. The identity transformer returns the output without transformation.