Response Synthesizer
Modules concerning the response synthesizers used in Gen AI applications.
ResponseSynthesizer(strategy, streamable=True)
Bases: Component
A response synthesizer that uses a strategy to synthesize the response.
Attributes:

| Name | Type | Description |
|---|---|---|
| strategy | BaseSynthesisStrategy | The strategy used to synthesize the response. |
| streamable | bool | A flag to indicate whether the synthesized response will be streamed if an event emitter is provided. |
The ResponseSynthesizer class provides a unified interface for synthesizing the response
using different strategies:
Stuff strategy
This strategy utilizes a language model to synthesize a response based on the provided inputs. It employs the "stuff" technique, where all the provided chunks are stuffed into the prompt together. The prompt is then passed to the language model, which synthesizes a response in a single call.
Usage example:
```python
lm_request_processor = build_lm_request_processor(...)
synthesizer = ResponseSynthesizer.stuff(lm_request_processor=lm_request_processor)
response = await synthesizer.synthesize(query=query, chunks=chunks)
```
The stuff strategy can also be instantiated using a preset prompt template.
This simplifies usage, as only the model id is required:
```python
synthesizer = ResponseSynthesizer.stuff_preset(model_id="openai/gpt-4.1-nano")
response = await synthesizer.synthesize(query=query, chunks=chunks)
```
Map-reduce strategy
This strategy implements a two-phase approach for processing large amounts of content. In the map phase, each chunk is processed individually to generate intermediate summaries. In the reduce phase, all intermediate summaries are combined into a final response. This approach is useful when dealing with large amounts of content that need to be processed efficiently.
Usage example:
```python
map_processor = build_lm_request_processor(...)
reduce_processor = build_lm_request_processor(...)
synthesizer = ResponseSynthesizer.map_reduce(
    map_lm_request_processor=map_processor,
    reduce_lm_request_processor=reduce_processor,
)
response = await synthesizer.synthesize(query=query, chunks=chunks)
```
The map-reduce strategy can also be instantiated using preset prompt templates:
```python
synthesizer = ResponseSynthesizer.map_reduce_preset(
    map_model_id="openai/gpt-4.1-nano", reduce_model_id="openai/gpt-4.1-nano"
)
response = await synthesizer.synthesize(query=query, chunks=chunks)
```
Refine strategy
This strategy utilizes a language model to iteratively refine a response based on multiple contexts. It processes contexts in batches, where each iteration refines the previous answer with new context(s). This approach is useful when dealing with large amounts of context that need to be processed incrementally.
Usage example:
```python
lm_request_processor = build_lm_request_processor(...)
synthesizer = ResponseSynthesizer.refine(lm_request_processor=lm_request_processor, batch_size=2)
response = await synthesizer.synthesize(query=query, chunks=chunks)
```
The refine strategy can also be instantiated using a preset prompt template.
This simplifies usage, as only the model id is required:
```python
synthesizer = ResponseSynthesizer.refine_preset(model_id="openai/gpt-4.1-nano", batch_size=2)
response = await synthesizer.synthesize(query=query, chunks=chunks)
```
Static list strategy
This strategy generates a response by formatting a list of context items. It can be used when a response should be presented as a simple list without requiring language model processing. The response format is customizable by providing a function that formats the list of context as a response.
Usage example:
```python
synthesizer = ResponseSynthesizer.static_list()
response = await synthesizer.synthesize(chunks=chunks)
```
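A custom formatter can replace the default one. Below is a minimal sketch, assuming the function receives the context items as a list of strings (per the Callable[[list[str]], str] signature documented below); the bullet format is an arbitrary choice:

```python
def bullet_formatter(context_list: list[str]) -> str:
    # Render each context item as a markdown-style bullet point.
    return "\n".join(f"- {item}" for item in context_list)

synthesizer = ResponseSynthesizer.static_list(format_response_func=bullet_formatter)
response = await synthesizer.synthesize(chunks=chunks)
```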
Initializes a new instance of the ResponseSynthesizer class.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| strategy | BaseSynthesisStrategy | The strategy used to synthesize the response. | required |
| streamable | bool | A flag to indicate whether the synthesized response will be streamed if an event emitter is provided. Defaults to True. | True |
map_reduce(map_lm_request_processor, reduce_lm_request_processor=None, chunks_repacker=None, batch_size=1, extractor_func=None, streamable=True)
classmethod
Creates a response synthesizer with the map-reduce strategy.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| map_lm_request_processor | LMRequestProcessor | The request processor for the map phase. | required |
| reduce_lm_request_processor | LMRequestProcessor \| None | The request processor for the reduce phase. Defaults to None, in which case the map processor is used for both phases. | None |
| chunks_repacker | Repacker \| None | The repacker used to repack chunks during the reduce phase. Defaults to None, in which case a repacker with mode "context" is used. | None |
| batch_size | int | The number of chunks to include in each map step. Defaults to 1. | 1 |
| extractor_func | Callable[[str \| LMOutput], Any] \| None | A function to extract the language model output. Defaults to None, in which case the default extractor function is used. The default extractor function extracts the response attribute from the language model output. | None |
| streamable | bool | A flag to indicate whether the synthesized response will be streamed if an event emitter is provided. Defaults to True. | True |
Returns:

| Name | Type | Description |
|---|---|---|
| ResponseSynthesizer | ResponseSynthesizer | A response synthesizer with the map-reduce strategy. |
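For illustration, a sketch combining a larger batch size with a custom extractor; the getattr call mirrors the default extractor's documented behavior of reading the response attribute, and anything beyond that description is an assumption:

```python
map_processor = build_lm_request_processor(...)

synthesizer = ResponseSynthesizer.map_reduce(
    map_lm_request_processor=map_processor,  # also used for the reduce phase, since reduce_lm_request_processor is None
    batch_size=4,  # process four chunks per map step
    extractor_func=lambda output: getattr(output, "response", output),
)
response = await synthesizer.synthesize(query=query, chunks=chunks)
```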
map_reduce_preset(map_model_id, reduce_model_id, map_credentials=None, reduce_credentials=None, map_config=None, reduce_config=None, map_system_template=PRESET_PROMPT_CATALOG.map.system_template, reduce_system_template=PRESET_PROMPT_CATALOG.reduce.system_template, map_user_template=PRESET_PROMPT_CATALOG.map.user_template, reduce_user_template=PRESET_PROMPT_CATALOG.reduce.user_template, map_key_defaults=None, reduce_key_defaults=None, map_output_parser_type='none', reduce_output_parser_type='none', batch_size=1, chunks_repacker=None, extractor_func=None, streamable=True)
classmethod
Creates a response synthesizer with the preset map-reduce strategy.
This method creates a response synthesizer with the map-reduce strategy using the provided preset prompt templates. This enables flexible usage with separate models for the map and reduce phases.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| map_model_id | str \| ModelId | The model id for the map phase. Can either be a ModelId instance or a string in a format defined in the following page: https://gdplabs.gitbook.io/sdk/resources/supported-models#language-models-lms | required |
| reduce_model_id | str \| ModelId | The model id for the reduce phase. Can either be a ModelId instance or a string in a format defined in the following page: https://gdplabs.gitbook.io/sdk/resources/supported-models#language-models-lms | required |
| map_credentials | str \| dict[str, Any] \| None | The credentials for the map phase. Can either be: 1. An API key. 2. A path to a credentials JSON file, currently only supported for Google Vertex AI. 3. A dictionary of credentials, currently supported for Bedrock and LangChain. Defaults to None, in which case the credentials will be loaded from the appropriate env variables. | None |
| reduce_credentials | str \| dict[str, Any] \| None | The credentials for the reduce phase. Can either be: 1. An API key. 2. A path to a credentials JSON file, currently only supported for Google Vertex AI. 3. A dictionary of credentials, currently supported for Bedrock and LangChain. Defaults to None, in which case the credentials will be loaded from the appropriate env variables. | None |
| map_config | dict[str, Any] \| None | Additional configuration for the map phase. Defaults to None. | None |
| reduce_config | dict[str, Any] \| None | Additional configuration for the reduce phase. Defaults to None. | None |
| map_system_template | str | The system prompt template for the map phase. May contain placeholders enclosed in curly braces. | PRESET_PROMPT_CATALOG.map.system_template |
| reduce_system_template | str | The system prompt template for the reduce phase. May contain placeholders enclosed in curly braces. | PRESET_PROMPT_CATALOG.reduce.system_template |
| map_user_template | str | The user prompt template for the map phase. May contain placeholders enclosed in curly braces. | PRESET_PROMPT_CATALOG.map.user_template |
| reduce_user_template | str | The user prompt template for the reduce phase. May contain placeholders enclosed in curly braces. | PRESET_PROMPT_CATALOG.reduce.user_template |
| map_key_defaults | dict[str, str] \| None | Default values for the keys in the map phase prompt templates. Applied when the corresponding keys are not provided in the runtime input. Defaults to None, in which case no default values will be assigned to the keys. | None |
| reduce_key_defaults | dict[str, str] \| None | Default values for the keys in the reduce phase prompt templates. Applied when the corresponding keys are not provided in the runtime input. Defaults to None, in which case no default values will be assigned to the keys. | None |
| map_output_parser_type | str | The type of output parser to use for the map phase. Supports "json" and "none". Defaults to "none". | 'none' |
| reduce_output_parser_type | str | The type of output parser to use for the reduce phase. Supports "json" and "none". Defaults to "none". | 'none' |
| batch_size | int | The number of chunks to include in each map step. Defaults to 1. | 1 |
| chunks_repacker | Repacker \| None | The repacker used to repack chunks during the reduce phase. Defaults to None, in which case a repacker with mode "context" is used. | None |
| extractor_func | Callable[[str \| LMOutput], Any] \| None | A function to extract the language model output. Defaults to None, in which case the default extractor function is used. The default extractor function extracts the response attribute from the language model output. | None |
| streamable | bool | A flag to indicate whether the synthesized response will be streamed if an event emitter is provided. Defaults to True. | True |
Returns:

| Name | Type | Description |
|---|---|---|
| ResponseSynthesizer | ResponseSynthesizer | A response synthesizer with the map-reduce strategy. |
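A sketch using a smaller model for the map phase and a larger one for the reduce phase; "openai/gpt-4.1" is an illustrative model id, not one confirmed by this documentation:

```python
synthesizer = ResponseSynthesizer.map_reduce_preset(
    map_model_id="openai/gpt-4.1-nano",  # cheaper model for per-chunk summaries
    reduce_model_id="openai/gpt-4.1",    # illustrative id for a stronger model to combine them
    batch_size=2,                        # two chunks per map step
)
response = await synthesizer.synthesize(query=query, chunks=chunks)
```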
refine(lm_request_processor, batch_size=1, extractor_func=None, streamable=True)
classmethod
Creates a response synthesizer with the refine strategy.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| lm_request_processor | LMRequestProcessor | The request processor used to handle the response generation. | required |
| batch_size | int | The number of chunks to include in each step. Defaults to 1. | 1 |
| extractor_func | Callable[[str \| LMOutput], Any] \| None | A function to extract the language model output. Defaults to None, in which case the default extractor function is used. The default extractor function extracts the response attribute from the language model output. | None |
| streamable | bool | A flag to indicate whether the synthesized response will be streamed if an event emitter is provided. Defaults to True. | True |
Returns:

| Name | Type | Description |
|---|---|---|
| ResponseSynthesizer | ResponseSynthesizer | A response synthesizer with the refine strategy. |
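As a sketch, a larger batch_size trades prompt size for fewer iterations; assuming the batching behavior described above, eight chunks with batch_size=4 yield two language model calls instead of eight:

```python
lm_request_processor = build_lm_request_processor(...)

# With batch_size=4, eight chunks produce an initial answer from the first
# four chunks and one refinement pass over the remaining four.
synthesizer = ResponseSynthesizer.refine(
    lm_request_processor=lm_request_processor,
    batch_size=4,
)
response = await synthesizer.synthesize(query=query, chunks=chunks)
```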
refine_preset(model_id, credentials=None, config=None, system_template=PRESET_PROMPT_CATALOG.refine.system_template, user_template=PRESET_PROMPT_CATALOG.refine.user_template, key_defaults=None, output_parser_type='none', batch_size=1, extractor_func=None, streamable=True)
classmethod
Creates a response synthesizer with the preset refine strategy.
This method creates a response synthesizer with the refine strategy using the provided preset prompt templates. This enables simple usage as only the model id is required.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model_id | str \| ModelId | The model id. Can either be a ModelId instance or a string in a format defined in the following page: https://gdplabs.gitbook.io/sdk/resources/supported-models#language-models-lms | required |
| credentials | str \| dict[str, Any] \| None | The credentials for the language model. Can either be: 1. An API key. 2. A path to a credentials JSON file, currently only supported for Google Vertex AI. 3. A dictionary of credentials, currently supported for Bedrock and LangChain. Defaults to None, in which case the credentials will be loaded from the appropriate env variables. | None |
| config | dict[str, Any] \| None | Additional configuration for the language model. Defaults to None. | None |
| system_template | str | The system prompt template. May contain placeholders enclosed in curly braces. | PRESET_PROMPT_CATALOG.refine.system_template |
| user_template | str \| None | The user prompt template. May contain placeholders enclosed in curly braces. | PRESET_PROMPT_CATALOG.refine.user_template |
| key_defaults | dict[str, str] \| None | Default values for the keys in the prompt templates. Applied when the corresponding keys are not provided in the runtime input. Defaults to None, in which case no default values will be assigned to the keys. | None |
| output_parser_type | str | The type of output parser to use. Supports "json" and "none". Defaults to "none". | 'none' |
| batch_size | int | The number of chunks to include in each step. Defaults to 1. | 1 |
| extractor_func | Callable[[str \| LMOutput], Any] \| None | A function to extract the language model output. Defaults to None, in which case the default extractor function is used. The default extractor function extracts the response attribute from the language model output. | None |
| streamable | bool | A flag to indicate whether the synthesized response will be streamed if an event emitter is provided. Defaults to True. | True |
Returns:

| Name | Type | Description |
|---|---|---|
| ResponseSynthesizer | ResponseSynthesizer | A response synthesizer with the refine strategy. |
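A sketch supplying template defaults via key_defaults; the "tone" key is hypothetical and only takes effect if the preset template actually contains a matching placeholder:

```python
synthesizer = ResponseSynthesizer.refine_preset(
    model_id="openai/gpt-4.1-nano",
    key_defaults={"tone": "concise"},  # hypothetical key; must match a placeholder in the template
    batch_size=2,
)
response = await synthesizer.synthesize(query=query, chunks=chunks)
```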
static_list(format_response_func=None, streamable=True)
classmethod
Creates a response synthesizer with the static list strategy.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| format_response_func | Callable[[list[str]], str] \| None | A function that formats a list of context as a response. Defaults to None, in which case the default formatter function will be used. | None |
| streamable | bool | A flag to indicate whether the synthesized response will be streamed if an event emitter is provided. Defaults to True. | True |
Returns:

| Name | Type | Description |
|---|---|---|
| ResponseSynthesizer | ResponseSynthesizer | A response synthesizer with the static list strategy. |
stuff(lm_request_processor, chunks_repacker=None, extractor_func=None, streamable=True)
classmethod
Creates a response synthesizer with the stuff strategy.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| lm_request_processor | LMRequestProcessor | The request processor used to handle the response generation. | required |
| chunks_repacker | Repacker \| None | The repacker used to repack the chunks into a context string. Defaults to None, in which case a repacker with mode "context" is used. | None |
| extractor_func | Callable[[str \| LMOutput], Any] \| None | A function to extract the language model output. Defaults to None, in which case the default extractor function is used. The default extractor function extracts the response attribute from the language model output. | None |
| streamable | bool | A flag to indicate whether the synthesized response will be streamed if an event emitter is provided. Defaults to True. | True |
Returns:

| Name | Type | Description |
|---|---|---|
| ResponseSynthesizer | ResponseSynthesizer | A response synthesizer with the stuff strategy. |
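For illustration, a sketch that plugs in a custom extractor to post-process the model output; the getattr call mirrors the default extractor's documented behavior, and the whitespace trimming is an arbitrary example:

```python
lm_request_processor = build_lm_request_processor(...)

def strip_extractor(output):
    # Read the `response` attribute when present (as the default extractor does),
    # then trim surrounding whitespace as an arbitrary post-processing step.
    text = getattr(output, "response", output)
    return text.strip()

synthesizer = ResponseSynthesizer.stuff(
    lm_request_processor=lm_request_processor,
    extractor_func=strip_extractor,
)
response = await synthesizer.synthesize(query=query, chunks=chunks)
```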
stuff_preset(model_id, credentials=None, config=None, system_template=PRESET_PROMPT_CATALOG.stuff.system_template, user_template=PRESET_PROMPT_CATALOG.stuff.user_template, key_defaults=None, output_parser_type='none', chunks_repacker=None, extractor_func=None, streamable=True)
classmethod
Creates a response synthesizer with the preset stuff strategy.
This method creates a response synthesizer with the stuff strategy using the provided preset prompt templates. This enables simple usage as only the model id is required.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model_id | str \| ModelId | The model id. Can either be a ModelId instance or a string in a format defined in the following page: https://gdplabs.gitbook.io/sdk/resources/supported-models#language-models-lms | required |
| credentials | str \| dict[str, Any] \| None | The credentials for the language model. Can either be: 1. An API key. 2. A path to a credentials JSON file, currently only supported for Google Vertex AI. 3. A dictionary of credentials, currently supported for Bedrock and LangChain. Defaults to None, in which case the credentials will be loaded from the appropriate env variables. | None |
| config | dict[str, Any] \| None | Additional configuration for the language model. Defaults to None. | None |
| system_template | str | The system prompt template. May contain placeholders enclosed in curly braces. | PRESET_PROMPT_CATALOG.stuff.system_template |
| user_template | str | The user prompt template. May contain placeholders enclosed in curly braces. | PRESET_PROMPT_CATALOG.stuff.user_template |
| key_defaults | dict[str, str] \| None | Default values for the keys in the prompt templates. Applied when the corresponding keys are not provided in the runtime input. Defaults to None, in which case no default values will be assigned to the keys. | None |
| output_parser_type | str | The type of output parser to use. Supports "json" and "none". Defaults to "none". | 'none' |
| chunks_repacker | Repacker \| None | The repacker used to repack the chunks into a context string. Defaults to None, in which case a repacker with mode "context" is used. | None |
| extractor_func | Callable[[str \| LMOutput], Any] \| None | A function to extract the language model output. Defaults to None, in which case the default extractor function is used. The default extractor function extracts the response attribute from the language model output. | None |
| streamable | bool | A flag to indicate whether the synthesized response will be streamed if an event emitter is provided. Defaults to True. | True |
Returns:

| Name | Type | Description |
|---|---|---|
| ResponseSynthesizer | ResponseSynthesizer | A response synthesizer with the stuff strategy. |
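A sketch passing explicit credentials and extra model configuration; the API key is a placeholder, and the "temperature" config key is an assumption about what the underlying model accepts:

```python
synthesizer = ResponseSynthesizer.stuff_preset(
    model_id="openai/gpt-4.1-nano",
    credentials="sk-...",         # placeholder API key; omit to load from env variables
    config={"temperature": 0.2},  # assumed config key, passed through to the language model
)
response = await synthesizer.synthesize(query=query, chunks=chunks)
```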
synthesize(query=None, chunks=None, history=None, extra_contents=None, hyperparameters=None, event_emitter=None, **kwargs)
async
Synthesizes a response using the assigned strategy.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| query | str \| None | The input query used to synthesize the response. Defaults to None. | None |
| chunks | list[Chunk] \| None | The list of chunks to be used as context. Defaults to None. | None |
| history | list[Message] \| None | The conversation history to be considered in generating the response. Defaults to None. | None |
| extra_contents | list[MessageContent] \| None | A list of extra contents to be included when generating the response. Defaults to None. | None |
| hyperparameters | dict[str, Any] \| None | The hyperparameters to be passed to the language model. Defaults to None. | None |
| context_list | list[str] \| None | The list of context to be included in the response. Defaults to None. | None |
| event_emitter | EventEmitter \| None | The event emitter for handling events during response synthesis. Defaults to None. | None |
| **kwargs | Any | Additional keyword arguments that will be passed to the strategy. | {} |
Returns:

| Name | Type | Description |
|---|---|---|
| Any | Any | The synthesized response. |
Raises:

| Type | Description |
|---|---|
| NotImplementedError | If the method is not implemented in a subclass. |
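As a sketch, streaming only occurs when an event emitter is supplied and streamable is True; the bare EventEmitter() construction below is an assumption, so consult the EventEmitter documentation for the actual setup:

```python
emitter = EventEmitter()  # assumed constructor; actual setup may differ

synthesizer = ResponseSynthesizer.stuff_preset(model_id="openai/gpt-4.1-nano")
response = await synthesizer.synthesize(
    query=query,
    chunks=chunks,
    event_emitter=emitter,  # the response is emitted as events while it is synthesized
)
```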
StaticListResponseSynthesizer(format_response_func=None, streamable=True, response_prefix=None, fallback_response=None, delimiter=None)
Bases: BaseResponseSynthesizer
A response synthesizer that synthesizes a static list response.
The StaticListResponseSynthesizer class generates a response by formatting a list of context items.
This class can be used when a response should be presented as a simple list.
The response format is customizable by providing a function that formats the list of context as a response.
Attributes:

| Name | Type | Description |
|---|---|---|
| format_response_func | Callable[[list[str]], str] | A function that formats a list of context as a response. |
| streamable | bool | A flag to indicate whether the synthesized response will be streamed if an event emitter is provided. |
Initializes a new instance of the StaticListResponseSynthesizer class.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| format_response_func | Callable[[list[str]], str] | A function that formats a list of context as a response. Defaults to None, in which case the default formatter function will be used. | None |
| streamable | bool | A flag to indicate whether the synthesized response will be streamed if an event emitter is provided. Defaults to True. | True |
| response_prefix | str | Deprecated param for the string prefix that precedes the list of items. Will be removed in v0.6. Defaults to None. | None |
| fallback_response | str | Deprecated param for the fallback response if the context list is empty. Will be removed in v0.6. Defaults to None. | None |
| delimiter | str | Deprecated param for the delimiter to be placed in between context list elements. Will be removed in v0.6. Defaults to None. | None |
synthesize_response(query=None, state_variables=None, history=None, extra_contents=None, hyperparameters=None, context_list=None, event_emitter=None)
async
Synthesizes a static list response based on the provided context_list.
This method generates a response using the items in the context_list. If the list is empty, it returns
a fallback response. The list items are prefixed with a customizable preceding line and are numbered
sequentially. If an event_emitter is provided, the response is emitted as an event.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| query | str \| None | The input query. Unused in this synthesizer. Defaults to None. | None |
| state_variables | dict[str, Any] \| None | Deprecated parameter for passing state variables to be included in the prompt. Defaults to None. | None |
| history | list[Message] \| None | The conversation history to be considered in generating the response. Defaults to None. | None |
| extra_contents | list[MessageContent] \| None | A list of extra contents to be included when generating the response. Defaults to None. | None |
| hyperparameters | dict[str, Any] \| None | The hyperparameters to be passed to the language model. Unused in this synthesizer. Defaults to None. | None |
| context_list | list[str] \| None | The list of context to be included in the response. Defaults to None, in which case an empty list is used. | None |
| event_emitter | EventEmitter \| None | The event emitter for handling events during response synthesis. Defaults to None. | None |
Returns:

| Name | Type | Description |
|---|---|---|
| str | str | The synthesized list-based response or the fallback response. |
Raises:

| Type | Description |
|---|---|
| ValueError | If … |
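A sketch of using the class directly with an explicit context_list; the exact output wording depends on the default formatter, which is described above as numbering items sequentially:

```python
synthesizer = StaticListResponseSynthesizer()
response = await synthesizer.synthesize_response(
    context_list=["First finding", "Second finding"],
)
# With the default formatter, the items are numbered sequentially,
# e.g. "1. First finding\n2. Second finding" (exact wording may differ).
```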
StuffResponseSynthesizer(lm_request_processor, streamable=True, extractor_func=None)
Bases: BaseResponseSynthesizer, UsesLM
A response synthesizer that synthesizes a response using the stuff technique.
The StuffResponseSynthesizer class implements the BaseResponseSynthesizer by using a language model request
processor to generate a response based on the provided query. It employs the "stuff" technique, where the optional
input query and other input variables passed through state_variables are processed to create the prompt for
the language model and the response is generated in a single language model call.
Attributes:

| Name | Type | Description |
|---|---|---|
| lm_request_processor | LMRequestProcessor | The request processor used to handle the response generation. |
| streamable | bool | A flag to indicate whether the synthesized response will be streamed if an event emitter is provided. |
| extractor_func | Callable[[str \| LMOutput], Any] | A function to extract the language model output. |
Initializes a new instance of the StuffResponseSynthesizer class.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| lm_request_processor | LMRequestProcessor | The request processor used to handle the response generation. | required |
| streamable | bool | A flag to indicate whether the synthesized response will be streamed if an event emitter is provided. Defaults to True. | True |
| extractor_func | Callable[[str \| LMOutput], Any] | A function to extract the language model output. Defaults to None, in which case the default extractor function is used. The default extractor function extracts the response attribute from the language model output. | None |
synthesize_response(query=None, state_variables=None, history=None, extra_contents=None, hyperparameters=None, context_list=None, event_emitter=None, **kwargs)
async
Synthesizes the response using the provided query and state variables.
This method takes the input query and additional state_variables, integrates them into prompt_kwargs,
and passes them to the LMRequestProcessor for processing. The synthesized response is then returned.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| query | str \| None | The input query for generating the response. Defaults to None. | None |
| state_variables | dict[str, Any] \| None | Deprecated parameter for passing state variables to be included in the prompt. Replaced by **kwargs. Defaults to None. | None |
| history | list[Message] \| None | The conversation history to be considered in generating the response. Defaults to None. | None |
| extra_contents | list[MessageContent] \| None | A list of extra contents to be included when generating the response. Defaults to None. | None |
| hyperparameters | dict[str, Any] \| None | The hyperparameters to be passed to the language model. Defaults to None. | None |
| context_list | list[str] \| None | The list of context to be included in the response. Unused in this synthesizer. Defaults to None. | None |
| event_emitter | EventEmitter \| None | The event emitter for handling events during response synthesis. Defaults to None. | None |
| **kwargs | Any | Keyword arguments that will be passed to format the prompt builder. Values must be either a string or an object that can be serialized to a string. Reserved keyword arguments that cannot be passed to the prompt builder include: … | {} |
Returns:

| Name | Type | Description |
|---|---|---|
| Any | Any | The synthesized response from the language model. |
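As a closing sketch, extra keyword arguments flow into the prompt builder as template values; the "audience" key below is hypothetical and only resolves if the prompt template contains a matching {audience} placeholder:

```python
processor = build_lm_request_processor(...)
synthesizer = StuffResponseSynthesizer(lm_request_processor=processor)

response = await synthesizer.synthesize_response(
    query="Summarize the quarterly report",
    audience="executives",  # hypothetical template key; must match an {audience} placeholder
)
```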