LM request processor
Defines a request processor to perform language model inference.
LMRequestProcessor(prompt_builder, lm_invoker, output_parser=None)
A request processor to perform language model inference.
The LMRequestProcessor class handles the process of building a prompt, invoking a language model, and optionally parsing the output. It combines a prompt builder, a language model invoker, and an optional output parser to manage the inference process in Gen AI applications.
Attributes:
Name | Type | Description |
---|---|---|
`prompt_builder` | `PromptBuilder \| BasePromptBuilder \| MultimodalPromptBuilder` | The prompt builder used to format the prompt. |
`lm_invoker` | `BaseLMInvoker \| BaseMultimodalLMInvoker` | The language model invoker that handles the model inference. |
`output_parser` | `BaseOutputParser \| None` | The optional parser to process the model's output, if any. |
`tool_dict` | `dict[str, Tool]` | A dictionary of tools provided to the language model to enable tool calling, if any. It maps each tool name to the corresponding tool. |
Initializes a new instance of the LMRequestProcessor class.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`prompt_builder` | `PromptBuilder \| BasePromptBuilder \| MultimodalPromptBuilder` | The prompt builder used to format the prompt. | required |
`lm_invoker` | `BaseLMInvoker \| BaseMultimodalLMInvoker` | The language model invoker that handles the model inference. | required |
`output_parser` | `BaseOutputParser` | An optional parser to process the model's output. Defaults to None. | `None` |
WARNING: Support for MultimodalPromptBuilder is deprecated and will be removed in version 0.5.0.
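For orientation, here is a minimal construction sketch. The import paths, the `OpenAILMInvoker` class, and the `PromptBuilder` constructor arguments shown below are illustrative assumptions, not the library's confirmed API; substitute the actual modules, invoker class, and template parameters from your installation.

```python
# NOTE: import paths, the invoker class, and the PromptBuilder arguments are
# illustrative assumptions, not the library's confirmed API.
from gllm_inference.prompt_builder import PromptBuilder          # assumed path
from gllm_inference.lm_invoker import OpenAILMInvoker            # assumed invoker class
from gllm_inference.request_processor import LMRequestProcessor  # assumed path

# A prompt builder exposing a "text" placeholder to be filled at request time.
prompt_builder = PromptBuilder(
    system_template="You are a concise assistant.",
    user_template="Summarize the following text:\n{text}",
)

# Any BaseLMInvoker subclass can be used here.
lm_invoker = OpenAILMInvoker(model_name="gpt-4o-mini")

# output_parser is optional; omit it to receive the raw LM output.
processor = LMRequestProcessor(prompt_builder=prompt_builder, lm_invoker=lm_invoker)
```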
clear_response_schema()
Clears the response schema for the LM invoker.
clear_tools()
Clears the tools for the LM invoker.
async process(prompt_kwargs, history=None, extra_contents=None, hyperparameters=None, event_emitter=None, attachments=None, system_multimodal_contents=None, user_multimodal_contents=None, auto_execute_tools=True, max_lm_calls=5)
Processes a language model inference request.
This method processes the language model inference request as follows:
1. Assembling the prompt using the provided keyword arguments.
2. Invoking the language model with the assembled prompt and optional hyperparameters.
3. Automatically executing tools when auto_execute_tools is True and the LM output includes tool calls.
4. Optionally parsing the model's output using the output parser, if provided. If the model output is an LMOutput object, the output parser processes its response attribute.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`prompt_kwargs` | `dict[str, Any]` | A dictionary of arguments used to format the prompt. | required |
`history` | `MultimodalPrompt \| None` | The conversation history to be included in the prompt. Defaults to None. | `None` |
`extra_contents` | `list[MultimodalContent] \| None` | A list of extra contents to be included in the user message. Defaults to None. | `None` |
`hyperparameters` | `dict[str, Any] \| None` | A dictionary of hyperparameters for the model invocation. Defaults to None. | `None` |
`event_emitter` | `EventEmitter \| None` | An event emitter for streaming model outputs. Defaults to None. | `None` |
`attachments` | `list[Attachment] \| None` | Deprecated parameter to handle attachments. Will be removed in v0.5.0. Defaults to None. | `None` |
`system_multimodal_contents` | `list[Any] \| None` | Deprecated parameter to handle attachments. Will be removed in v0.5.0. Defaults to None. | `None` |
`user_multimodal_contents` | `list[Any] \| None` | Deprecated parameter to handle attachments. Will be removed in v0.5.0. Defaults to None. | `None` |
`auto_execute_tools` | `bool` | Whether to automatically execute tools when the LM output includes tool calls. Defaults to True. | `True` |
`max_lm_calls` | `int` | The maximum number of times the language model can be invoked when auto_execute_tools is True. Defaults to 5. | `5` |
Returns:
Name | Type | Description |
---|---|---|
Any | `Any` | The result of the language model invocation, optionally parsed by the output parser. |
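A minimal usage sketch of process, continuing the `processor` built in the construction sketch above; the prompt key and the hyperparameter name are assumptions tied to that example rather than fixed by this API.

```python
import asyncio

async def main():
    # prompt_kwargs fills the placeholders defined by the prompt builder
    # (the "text" key matches the assumed template in the construction sketch).
    result = await processor.process(
        prompt_kwargs={"text": "LMRequestProcessor chains prompt building, "
                               "model invocation, and output parsing."},
        hyperparameters={"temperature": 0.2},  # forwarded to the LM invoker, if supported
        auto_execute_tools=True,               # execute tool calls returned by the LM
        max_lm_calls=5,                        # cap LM invocations during the tool loop
    )
    print(result)  # raw LM output, or the parsed result if an output parser was set

asyncio.run(main())
```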
set_response_schema(response_schema)
Sets the response schema for the LM invoker.
Any existing response schema will be replaced.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`response_schema` | `ResponseSchema \| None` | The response schema to be used. | required |
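As a sketch, assuming a Pydantic model class is an accepted ResponseSchema (if the library expects a JSON-schema dict or a dedicated wrapper type instead, adapt accordingly):

```python
from pydantic import BaseModel

class Summary(BaseModel):
    title: str
    bullet_points: list[str]

# Assumption: a Pydantic model class is a valid ResponseSchema in this library.
processor.set_response_schema(Summary)

# Later, remove structured output again.
processor.clear_response_schema()
```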
set_tools(tools)
Sets the tools for the LM invoker.
Any existing tools will be replaced.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`tools` | `list[Tool]` | The list of tools to be used. | required |
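A brief sketch, assuming `weather_tool` is an existing Tool instance constructed elsewhere (however Tool is defined in this library):

```python
# set_tools replaces any previously configured tools.
processor.set_tools([weather_tool])  # weather_tool: an assumed, pre-built Tool instance

# Remove all tools when tool calling is no longer needed.
processor.clear_tools()
```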