Completeness
Completeness metric.
This metric evaluates the completeness of the model's output.
```python
CompletenessMetric(
    model=DefaultValues.MODEL,
    model_credentials=None,
    model_config=None,
    prompt_builder=None,
    response_schema=None,
    batch_status_check_interval=DefaultValues.BATCH_STATUS_CHECK_INTERVAL,
    batch_max_iterations=DefaultValues.BATCH_MAX_ITERATIONS,
)
```
Bases: LMBasedMetric
Completeness metric.
This metric evaluates the completeness of the model's output compared to the expected output.
Available Fields
- query (str): The query.
- generated_response (str): The generated response.
- expected_response (str): The expected response.
Scoring
- 1-3 (Continuous): Scale from 1 to 3, where 1 is not complete, 2 is partially complete, and 3 is complete.
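In the SDK, this score is assigned by an LM judge, so the following is a hypothetical illustration only, not the actual scoring logic: one way coverage of expected statements could map onto the documented 1-3 scale.

```python
# Hypothetical illustration only: the SDK's real scoring is done by an LM
# judge. This sketch shows one plausible mapping from statement coverage
# to the documented 1-3 completeness scale.
def coverage_to_score(expected: list[str], generated: list[str]) -> int:
    """Map the fraction of expected statements found in the generated
    statements to the 1-3 completeness scale."""
    if not expected:
        return 3  # nothing was required, so the response is trivially complete
    covered = sum(1 for statement in expected if statement in generated)
    ratio = covered / len(expected)
    if ratio == 0:
        return 1  # not complete: no expected statement is covered
    if ratio < 1:
        return 2  # partially complete: some statements are missing
    return 3  # complete: every expected statement is covered

print(coverage_to_score(["Red", "Yellow", "Blue"], ["Red", "Blue"]))  # -> 2
```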
Cookbook Example
Please refer to example_completeness.py in the gen-ai-sdk-cookbook repository.
Initialize the CompletenessMetric class.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `model` | `Union[str, ModelId, BaseLMInvoker]` | The model to use for the metric. | `DefaultValues.MODEL` |
| `model_credentials` | `str \| None` | The model credentials to use for the metric. Defaults to `None`. | `None` |
| `model_config` | `dict[str, Any] \| None` | The model config to use for the metric. Defaults to an empty dictionary. | `None` |
| `prompt_builder` | `PromptBuilder \| None` | The prompt builder to use for the metric. Defaults to the default prompt builder. | `None` |
| `response_schema` | `ResponseSchema \| None` | The response schema to use for the metric. Defaults to `CompletenessResponseSchema`. | `None` |
| `batch_status_check_interval` | `float` | Time between batch status checks, in seconds. Defaults to 30.0. | `DefaultValues.BATCH_STATUS_CHECK_INTERVAL` |
| `batch_max_iterations` | `int` | Maximum number of status check iterations before timeout. Defaults to 120. | `DefaultValues.BATCH_MAX_ITERATIONS` |
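The two batch parameters together bound how long a batch evaluation will poll before timing out. A quick worked check with the documented defaults (30.0 seconds and 120 iterations):

```python
# With the documented defaults, the maximum time spent polling a batch job
# before timing out is the product of the two batch parameters.
BATCH_STATUS_CHECK_INTERVAL = 30.0  # seconds between status checks (default)
BATCH_MAX_ITERATIONS = 120          # status checks before giving up (default)

timeout_seconds = BATCH_STATUS_CHECK_INTERVAL * BATCH_MAX_ITERATIONS
print(timeout_seconds / 60)  # -> 60.0 (minutes), i.e. one hour of polling
```

Raise `batch_max_iterations` (or lengthen the interval) if batch jobs in your environment routinely take longer than an hour to finish.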
CompletenessResponseSchema
Bases: BaseModel
Response schema for the completeness metric.
Attributes:
| Name | Type | Description |
|---|---|---|
| `question` | `str` | The question that was asked. |
| `expected_output_statements` | `list[str]` | The expected output statements. |
| `generated_output_statements` | `list[str]` | The generated output statements. |
| `count` | `str` | The count of the generated output statements. |
| `score` | `int` | The score of the generated output statements. |
| `explanation` | `str` | The explanation of the generated output statements. |
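As an illustrative stand-in (the real `CompletenessResponseSchema` is a pydantic `BaseModel` inside the SDK), a plain dataclass mirroring the documented fields shows the shape of a parsed judge response; the field values below are made up for the example.

```python
from dataclasses import dataclass, asdict

# Illustrative stand-in only: the real CompletenessResponseSchema is a
# pydantic BaseModel. This dataclass mirrors the documented fields so the
# shape of a parsed response is easy to see.
@dataclass
class CompletenessResponse:
    question: str
    expected_output_statements: list[str]
    generated_output_statements: list[str]
    count: str        # documented as str: count of generated output statements
    score: int        # 1-3 completeness score
    explanation: str

resp = CompletenessResponse(
    question="What are the primary colors?",
    expected_output_statements=["Red", "Yellow", "Blue"],
    generated_output_statements=["Red", "Blue"],
    count="2",
    score=2,
    explanation="The response omits Yellow, so it is only partially complete.",
)
print(asdict(resp)["score"])  # -> 2
```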