
Response Synthesizer

Modules concerning the response synthesizers used in Gen AI applications.

StaticListResponseSynthesizer(response_prefix=DEFAULT_RESPONSE_PREFIX, fallback_response=DEFAULT_FALLBACK_RESPONSE, delimiter='\n', streamable=True)

Bases: BaseResponseSynthesizer

A response synthesizer that synthesizes a static list response.

The StaticListResponseSynthesizer class generates a response by formatting a list of context items. If no context is provided, it returns a fallback response. The response can be prefixed with a customizable string and is intended for use when a simple list-based response is required.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| response_prefix | str | The string prefix that precedes the list of items. |
| fallback_response | str | The fallback response returned if the context list is empty. |
| delimiter | str | The delimiter placed between context list elements. |
| streamable | bool | Whether the synthesized response is streamed when an event emitter is provided. |

Initializes a new instance of the StaticListResponseSynthesizer class.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| response_prefix | str | The string prefix that precedes the list of items. | DEFAULT_RESPONSE_PREFIX |
| fallback_response | str | The fallback response returned if the context list is empty. | DEFAULT_FALLBACK_RESPONSE |
| delimiter | str | The delimiter placed between context list elements. | '\n' |
| streamable | bool | Whether the synthesized response is streamed when an event emitter is provided. | True |

synthesize_response(query=None, state_variables=None, history=None, extra_contents=None, attachments=None, hyperparameters=None, event_emitter=None, system_multimodal_contents=None, user_multimodal_contents=None) async

Synthesizes a static list response based on the provided context_list.

This method generates a response using the items in the context_list. If the list is empty, it returns a fallback response. The list items are prefixed with a customizable preceding line and are numbered sequentially. If an event_emitter is provided, the response is emitted as an event.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| query | str \| None | The input query. Unused in this synthesizer. | None |
| state_variables | dict[str, Any] \| None | A dictionary that must include a context_list key of type list. | None |
| history | list[tuple[PromptRole, str \| list[Any]]] \| None | The chat history of the conversation to be considered when generating the response. Unused in this synthesizer. | None |
| extra_contents | list[MultimodalContent] \| None | Extra contents to include when generating the response. Unused in this synthesizer. | None |
| attachments | list[Attachment] \| None | Deprecated parameter for handling attachments. Will be removed in v0.5.0. | None |
| hyperparameters | dict[str, Any] \| None | The hyperparameters to pass to the language model. Unused in this synthesizer. | None |
| event_emitter | EventEmitter \| None | The event emitter for handling events during response synthesis. | None |
| system_multimodal_contents | list[Any] \| None | Deprecated parameter for handling attachments. Will be removed in v0.5.0. Unused in this synthesizer. | None |
| user_multimodal_contents | list[Any] \| None | Deprecated parameter for handling attachments. Will be removed in v0.5.0. Unused in this synthesizer. | None |

Returns:

| Type | Description |
| --- | --- |
| str | The synthesized list-based response or the fallback response. |

Raises:

| Type | Description |
| --- | --- |
| ValueError | If context_list is missing or is not of type list. |
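The documented behavior can be illustrated with a minimal, self-contained sketch. This is not the library class itself: the stand-in function, the DEFAULT_* values, and the exact joining of prefix and items are assumptions for illustration; only the validation, fallback, sequential numbering, and delimiter behavior come from the documentation above.

```python
from __future__ import annotations

import asyncio
from typing import Any

# Illustrative stand-ins; the real default values live in the library.
DEFAULT_RESPONSE_PREFIX = "Here are the items I found:"
DEFAULT_FALLBACK_RESPONSE = "Sorry, I could not find any relevant items."


async def synthesize_static_list(
    state_variables: dict[str, Any] | None = None,
    response_prefix: str = DEFAULT_RESPONSE_PREFIX,
    fallback_response: str = DEFAULT_FALLBACK_RESPONSE,
    delimiter: str = "\n",
) -> str:
    """Sketch of the documented flow: validate context_list, then format it."""
    state_variables = state_variables or {}
    context_list = state_variables.get("context_list")
    if not isinstance(context_list, list):
        # Mirrors the documented ValueError for a missing or non-list context_list.
        raise ValueError("state_variables must include a context_list key of type list.")
    if not context_list:
        return fallback_response
    # Items are numbered sequentially and joined with the delimiter.
    items = delimiter.join(f"{i}. {item}" for i, item in enumerate(context_list, start=1))
    # Assumption: the prefix is separated from the items by the same delimiter.
    return f"{response_prefix}{delimiter}{items}"


response = asyncio.run(synthesize_static_list({"context_list": ["Apple", "Banana"]}))
```

An empty context_list yields the fallback response, and a missing or non-list context_list raises ValueError, matching the Raises section above.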

StuffResponseSynthesizer(lm_request_processor, streamable=True, extractor_func=None)

Bases: BaseResponseSynthesizer, UsesLM

A response synthesizer that synthesizes response using the stuff technique.

The StuffResponseSynthesizer class implements BaseResponseSynthesizer using a language model request processor to generate a response to the provided query. It employs the "stuff" technique: the optional input query and any other input variables passed through state_variables are combined into a single prompt for the language model, and the response is generated in a single language model call.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| lm_request_processor | LMRequestProcessor | The request processor used to handle response generation. |
| streamable | bool | Whether the synthesized response is streamed when an event emitter is provided. |
| extractor_func | Callable[[MultimodalOutput], Any] | A function to extract the language model output. |

Initializes a new instance of the StuffResponseSynthesizer class.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| lm_request_processor | LMRequestProcessor | The request processor used to handle response generation. | required |
| streamable | bool | Whether the synthesized response is streamed when an event emitter is provided. | True |
| extractor_func | Callable[[MultimodalOutput], Any] | A function to extract the language model output. If None, the default extractor function is used, which extracts the response attribute from the language model output. | None |
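The role of extractor_func can be sketched with a stand-in output type. FakeLMOutput below is an assumption standing in for the library's MultimodalOutput, and both extractors are hypothetical examples; only the default behavior of extracting the response attribute comes from the documentation above.

```python
from dataclasses import dataclass, field
from typing import Any


@dataclass
class FakeLMOutput:
    """Illustrative stand-in for the library's MultimodalOutput."""
    response: str
    token_usage: dict[str, int] = field(default_factory=dict)


def default_extractor(output: FakeLMOutput) -> str:
    # Mirrors the documented default: extract the response attribute.
    return output.response


def usage_aware_extractor(output: FakeLMOutput) -> dict[str, Any]:
    # A custom extractor_func may return any shape, e.g. text plus token usage.
    return {"text": output.response.strip(), "tokens": output.token_usage.get("total", 0)}


out = FakeLMOutput(response="  Hello!  ", token_usage={"total": 12})
```

A custom extractor is useful when callers need more than the raw response string, such as post-processed text or usage metadata, without changing the synthesizer itself.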

synthesize_response(query=None, state_variables=None, history=None, extra_contents=None, attachments=None, hyperparameters=None, event_emitter=None, system_multimodal_contents=None, user_multimodal_contents=None) async

Synthesizes the response using the provided query and state variables.

This method takes the input query and additional state_variables, integrates them into prompt_kwargs, and passes them to the LMRequestProcessor for processing. The synthesized response is then returned.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| query | str \| None | The input query for generating the response. | None |
| state_variables | dict[str, Any] \| None | Additional state variables to include in the prompt. | None |
| history | list[tuple[PromptRole, str \| list[Any]]] \| None | The chat history of the conversation to be considered when generating the response. | None |
| extra_contents | list[MultimodalContent] \| None | Extra contents to include when generating the response. | None |
| attachments | list[Attachment] \| None | Deprecated parameter for handling attachments. Will be removed in v0.5.0. | None |
| hyperparameters | dict[str, Any] \| None | The hyperparameters to pass to the language model. | None |
| event_emitter | EventEmitter \| None | The event emitter for handling events during response synthesis. | None |
| system_multimodal_contents | list[Any] \| None | Deprecated parameter for handling attachments. Will be removed in v0.5.0. | None |
| user_multimodal_contents | list[Any] \| None | Deprecated parameter for handling attachments. Will be removed in v0.5.0. | None |

Returns:

| Type | Description |
| --- | --- |
| Any | The synthesized response from the language model. |
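The "stuff" flow described above — merge the query and state_variables into prompt_kwargs, then make a single call through the request processor — can be sketched as follows. The fake processor and the exact prompt template are assumptions; the real LMRequestProcessor builds the prompt from its own configuration.

```python
from __future__ import annotations

import asyncio
from typing import Any, Awaitable, Callable


async def fake_lm_request_processor(prompt_kwargs: dict[str, Any]) -> str:
    # Stand-in for LMRequestProcessor: format a prompt and "call" a model once.
    prompt = f"Question: {prompt_kwargs['query']}\nContext: {prompt_kwargs['context']}"
    return f"Answer based on:\n{prompt}"


async def synthesize_stuff(
    query: str | None,
    state_variables: dict[str, Any] | None,
    processor: Callable[[dict[str, Any]], Awaitable[str]],
) -> Any:
    # Mirrors the documented flow: integrate query and state_variables into
    # prompt_kwargs, then delegate to the processor in a single call.
    prompt_kwargs = {"query": query, **(state_variables or {})}
    return await processor(prompt_kwargs)


result = asyncio.run(
    synthesize_stuff(
        "What is RAG?",
        {"context": "Retrieval-augmented generation."},
        fake_lm_request_processor,
    )
)
```

Because everything is stuffed into one prompt, this technique suits contexts that fit comfortably in the model's window; larger contexts call for a different synthesizer.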