LM request processor
Defines a request processor to perform language model inference.
LMRequestProcessor(prompt_builder, lm_invoker)
A request processor to perform language model inference.
The LMRequestProcessor class handles the process of building a prompt and invoking a language model. It combines a prompt builder and a language model invoker to manage the inference process in Gen AI applications.
Attributes:

| Name | Type | Description |
|---|---|---|
| prompt_builder | PromptBuilder | The prompt builder used to format the prompt. |
| lm_invoker | BaseLMInvoker | The language model invoker that handles the model inference. |
| tool_dict | dict[str, Tool] | A dictionary of tools provided to the language model to enable tool calling, if any. The dictionary maps each tool name to the tool itself. |
Initializes a new instance of the LMRequestProcessor class.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| prompt_builder | PromptBuilder | The prompt builder used to format the prompt. | required |
| lm_invoker | BaseLMInvoker | The language model invoker that handles the model inference. | required |
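The composition described above can be sketched as follows. Note that PromptBuilder, EchoLMInvoker, and the simplified synchronous process method are illustrative stand-ins written for this sketch, not imports from the actual library:

```python
class PromptBuilder:
    """Stand-in prompt builder: formats a template with keyword arguments."""

    def __init__(self, template: str):
        self.template = template

    def format(self, **kwargs) -> str:
        return self.template.format(**kwargs)


class EchoLMInvoker:
    """Stand-in invoker that echoes the prompt instead of calling a real model."""

    def invoke(self, prompt: str) -> str:
        return f"LM response to: {prompt}"


class LMRequestProcessor:
    """Sketch of the request processor: builds the prompt, then invokes the LM."""

    def __init__(self, prompt_builder, lm_invoker):
        self.prompt_builder = prompt_builder
        self.lm_invoker = lm_invoker

    def process(self, **kwargs) -> str:
        prompt = self.prompt_builder.format(**kwargs)
        return self.lm_invoker.invoke(prompt)


processor = LMRequestProcessor(PromptBuilder("Summarize: {text}"), EchoLMInvoker())
result = processor.process(text="hello")
```

The point of the pattern is that the processor owns no model logic itself; swapping the invoker changes the backend without touching prompt construction.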
clear_response_schema()
Clears the response schema for the LM invoker.
clear_tools()
Clears the tools for the LM invoker.
async process(history=None, extra_contents=None, hyperparameters=None, event_emitter=None, auto_execute_tools=True, max_lm_calls=5, **kwargs)
Processes a language model inference request.
This method processes the language model inference request as follows:
1. Assembles the prompt using the provided keyword arguments.
2. Invokes the language model with the assembled prompt and optional hyperparameters.
3. If auto_execute_tools is True, automatically executes tools when the LM output includes tool calls.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| history | list[Message] \| None | A list of conversation history to be included in the prompt. Defaults to None. | None |
| extra_contents | list[MessageContent] \| None | A list of extra contents to be included in the user message. Defaults to None. | None |
| hyperparameters | dict[str, Any] \| None | A dictionary of hyperparameters for the model invocation. Defaults to None. | None |
| event_emitter | EventEmitter \| None | An event emitter for streaming model outputs. Defaults to None. | None |
| auto_execute_tools | bool | Whether to automatically execute tools when the LM output includes tool calls. Defaults to True. | True |
| max_lm_calls | int | The maximum number of times the language model can be invoked when auto_execute_tools is True. Defaults to 5. | 5 |
| **kwargs | Any | Keyword arguments passed to format the prompt builder. Values must be either a string or an object that can be serialized to a string. Some reserved keyword arguments cannot be passed to the prompt builder. | {} |
Returns:

| Name | Type | Description |
|---|---|---|
| LMOutput | LMOutput | The result of the language model invocation. |
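The invoke/execute-tools/re-invoke loop bounded by max_lm_calls can be sketched as below. FakeLM, the dict-shaped output, and the prompt-appending convention are assumptions made for this sketch; the real library's types and control flow may differ:

```python
import asyncio


class FakeLM:
    """Stand-in model: returns one tool call, then a final answer."""

    def __init__(self):
        self.calls = 0

    async def invoke(self, prompt):
        self.calls += 1
        if self.calls == 1:
            return {"tool_calls": [("add", (2, 3))]}
        return {"text": f"final answer after {self.calls} calls"}


async def process(lm, tools, prompt, auto_execute_tools=True, max_lm_calls=5):
    """Invoke the LM; while it requests tools, execute them and re-invoke,
    up to max_lm_calls invocations."""
    for _ in range(max_lm_calls):
        output = await lm.invoke(prompt)
        tool_calls = output.get("tool_calls")
        if not (auto_execute_tools and tool_calls):
            return output
        # Execute each requested tool and feed the result back into the prompt.
        for name, args in tool_calls:
            prompt += f"\n[{name} -> {tools[name](*args)}]"
    return output


tools = {"add": lambda a, b: a + b}
result = asyncio.run(process(FakeLM(), tools, "What is 2 + 3?"))
```

With auto_execute_tools=False the first output containing tool calls would be returned as-is, leaving tool execution to the caller.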
set_response_schema(response_schema)
Sets the response schema for the LM invoker.
Any existing response schema will be replaced.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| response_schema | ResponseSchema \| None | The response schema to be used. | required |
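A minimal sketch of the set/clear pair, assuming the schema is simply stored and replaced on the processor; the Processor class and dict-based schema here are illustrative, not the library's ResponseSchema type:

```python
class Processor:
    """Sketch: holds a single response schema that can be set or cleared."""

    def __init__(self):
        self.response_schema = None

    def set_response_schema(self, response_schema):
        # Any existing schema is replaced wholesale.
        self.response_schema = response_schema

    def clear_response_schema(self):
        self.response_schema = None


p = Processor()
p.set_response_schema({"type": "object", "properties": {"answer": {"type": "string"}}})
has_schema = p.response_schema is not None
p.clear_response_schema()
```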
set_tools(tools)
Sets the tools for the LM invoker.
Any existing tools will be replaced.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| tools | list[Tool] | The list of tools to be used. | required |
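Since tool_dict maps each tool name to the tool itself, set_tools plausibly rebuilds that mapping from the list. The Tool class below is a hypothetical stand-in for the library's Tool type:

```python
class Tool:
    """Stand-in tool: a name plus a callable."""

    def __init__(self, name, func):
        self.name = name
        self.func = func


class Processor:
    """Sketch: stores tools as a name-to-tool mapping, replaced on each set_tools."""

    def __init__(self):
        self.tool_dict = {}

    def set_tools(self, tools):
        # Replace any existing tools with the new name-to-tool mapping.
        self.tool_dict = {tool.name: tool for tool in tools}

    def clear_tools(self):
        self.tool_dict = {}


p = Processor()
p.set_tools([Tool("search", lambda q: q), Tool("add", lambda a, b: a + b)])
names = sorted(p.tool_dict)
```

Keyed storage lets the processor resolve a tool call by name in constant time when the LM output requests one.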