Completeness

Completeness metric.

This metric evaluates the completeness of the model's generated response against an expected response.

Authors

Surya Mahadi (made.r.s.mahadi@gdplabs.id)


CompletenessMetric(model=DefaultValues.MODEL, model_credentials=None, model_config=None, prompt_builder=None, response_schema=None)

Bases: LMBasedMetric

Completeness metric.

Attributes:

- name (str): The name of the metric.
- response_schema (ResponseSchema): The response schema to use for the metric.
- prompt_builder (PromptBuilder): The prompt builder to use for the metric.
- model (Union[str, ModelId, BaseLMInvoker]): The model to use for the metric.
- model_config (dict[str, Any] | None): The model config to use for the metric. Defaults to an empty dictionary.
- model_credentials (str): The model credentials to use for the metric.

Initialize the CompletenessMetric class.

Default expected input:

- query (str): The query that the model responded to.
- expected_response (str): The reference response used to judge completeness.
- generated_response (str): The model's generated response to be evaluated.
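For illustration, these fields map naturally onto a plain dictionary (the example values below are invented):

```python
# Example of the metric's default expected input (values are illustrative only).
metric_input = {
    "query": "What are the benefits of unit testing?",
    "expected_response": (
        "Unit testing catches bugs early, documents intended behavior, "
        "and makes refactoring safer."
    ),
    "generated_response": "Unit testing helps catch bugs early and makes refactoring safer.",
}
```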

Parameters:

- model (Union[str, ModelId, BaseLMInvoker]): The model to use for the metric. Defaults to DefaultValues.MODEL.
- model_credentials (str | None): The model credentials to use for the metric. Defaults to None.
- model_config (dict[str, Any] | None): The model config to use for the metric. Defaults to None, in which case an empty dictionary is used.
- prompt_builder (PromptBuilder | None): The prompt builder to use for the metric. Defaults to None, in which case the default prompt builder is used.
- response_schema (ResponseSchema | None): The response schema to use for the metric. Defaults to None, in which case CompletenessResponseSchema is used.
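A minimal usage sketch follows. The import path, the model identifier, and the evaluate-style call are assumptions for illustration (the documentation above does not name the invocation method); check the library's actual API before use.

```python
# A minimal sketch, not the library's verified API. The import path and the
# evaluate() method name are assumptions; only the constructor parameters
# documented above are taken from the source.
from gllm_evals.metrics.completeness import CompletenessMetric  # hypothetical path

metric = CompletenessMetric(
    model="openai/gpt-4o-mini",     # assumed model identifier string
    model_credentials="<api-key>",  # credentials for the chosen model
)

result = metric.evaluate(           # assumed method name
    query="What are the benefits of unit testing?",
    expected_response="Unit testing catches bugs early and makes refactoring safer.",
    generated_response="Unit testing helps catch bugs early.",
)
print(result)  # expected to carry the fields of CompletenessResponseSchema
```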

CompletenessResponseSchema

Bases: BaseModel

Response schema for the completeness metric.

Attributes:

- question (str): The question that was asked.
- expected_output_statements (list[str]): The statements extracted from the expected response.
- generated_output_statements (list[str]): The statements extracted from the generated response.
- count (str): The count of the generated output statements.
- score (int): The completeness score for the generated response.
- explanation (str): The explanation of the score.
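Put together, the schema corresponds to a Pydantic model along these lines (a reconstruction from the attribute list above, not the library's source):

```python
from pydantic import BaseModel


# Reconstructed from the documented attributes; field names and types match
# the list above, comments paraphrase the descriptions.
class CompletenessResponseSchema(BaseModel):
    """Response schema for the completeness metric."""

    question: str                           # the question that was asked
    expected_output_statements: list[str]   # statements from the expected response
    generated_output_statements: list[str]  # statements from the generated response
    count: str                              # count of generated statements (typed str in the docs)
    score: int                              # completeness score
    explanation: str                        # explanation of the score
```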