Engine

Engine for guardrail components.

GuardrailEngine

Bases: Protocol

Base interface for guardrail engines.

This protocol defines the contract that all guardrail engines must implement. Engines can check content for safety violations using various techniques such as phrase matching, topic classification, or external API calls.

All engines must be asynchronous to support both sync and async guardrail providers.

Attributes:

    config (BaseGuardrailEngineConfig): Engine configuration specifying what content types to check.

check_input(content, **kwargs) async

Check input content for safety violations.

This method should implement provider-specific safety checks on user input such as query content, context, or prompts before sending to LLM.

Parameters:

    content (str): The input text to evaluate for safety. Required.
    **kwargs: Additional engine-specific parameters (optional). Defaults to {}.

Returns:

    GuardrailResult: GuardrailResult indicating if the content is safe, with a reason if unsafe.

Raises:

    Exception: If the safety check fails due to engine errors.

check_output(content, **kwargs) async

Check output content for safety violations.

This method should implement provider-specific safety checks on system output such as LLM responses, generated text, or static messages before showing to users.

Parameters:

    content (str): The output text to evaluate for safety. Required.
    **kwargs: Additional engine-specific parameters (optional). Defaults to {}.

Returns:

    GuardrailResult: GuardrailResult indicating if the content is safe, with a reason if unsafe.

Raises:

    Exception: If the safety check fails due to engine errors.

NemoGuardrailEngine(config=None)

Bases: GuardrailEngine

Default engine that preserves the existing NeMo Guardrails behavior.

This engine implements the GuardrailEngine protocol to provide NeMo Guardrails functionality for content safety checking.

Attributes:

    config (NemoGuardrailEngineConfig): Engine configuration specifying what content types to check.
    rails (LLMRails | None): LLMRails instance.

Initializes a new instance of the NemoGuardrailEngine class.

Parameters:

    config (NemoGuardrailEngineConfig | None): NemoGuardrailEngineConfig with both base and NeMo-specific settings. Defaults to None.

check_input(content, **kwargs) async

Check input content for safety violations.

This method implements NeMo Guardrails checking for user input such as query content, context, or prompts before sending to LLM.

Parameters:

    content (str): The input text to evaluate for safety. Required.
    **kwargs (Any): Additional engine-specific parameters. Defaults to {}.

Returns:

    GuardrailResult: GuardrailResult indicating if the content is safe, with a reason if unsafe.

Raises:

    RuntimeError: If the guardrail engine is not initialized.

check_output(content, **kwargs) async

Check output content for safety violations.

This method implements NeMo Guardrails checking for system output such as LLM responses, generated text, or static messages before showing to users.

Parameters:

    content (str): The output text to evaluate for safety. Required.
    **kwargs (Any): Additional engine-specific parameters. Defaults to {}.

Returns:

    GuardrailResult: GuardrailResult indicating if the content is safe, with a reason if unsafe.

Raises:

    RuntimeError: If the guardrail engine is not initialized.

PhraseMatcherEngine(config=None, banned_phrases=None, use_spacy=None, model_name='en_core_web_sm')

Bases: GuardrailEngine

Engine that uses SpaCy's PhraseMatcher or a regex fallback for banned-phrase detection.

This engine implements the GuardrailEngine protocol to check content for banned phrases, using SpaCy for optimized matching when it is available and falling back to regex otherwise.

Attributes:

    config: Engine configuration specifying what content types to check.
    banned_phrases (list[str]): Phrases that are explicitly banned.
    use_spacy (bool): Whether to use SpaCy for phrase matching.
    model_name (str): SpaCy model name to load.
    banned_phrases_regex (Pattern | None): Compiled regex pattern for fallback.
    phrase_matcher (PhraseMatcher | None): SpaCy phrase matcher.
    nlp (Language | None): SpaCy language model.

Initialize the PhraseMatcherEngine.

Parameters:

    config (BaseGuardrailEngineConfig | None): Engine configuration. Defaults to BaseGuardrailEngineConfig with INPUT_ONLY mode.
    banned_phrases (list[str] | None): List of banned phrases. Defaults to PhraseMatcherEngine.DEFAULT_BANNED_PHRASES.
    use_spacy (bool | None): Whether to use SpaCy for phrase matching. Defaults to SPACY_AVAILABLE.
    model_name (str): SpaCy model name to load. Defaults to "en_core_web_sm".

check_input(content, **kwargs) async

Check input content for banned phrases.

Parameters:

    content (str): The input text to evaluate for safety. Required.
    **kwargs (Any): Additional engine-specific parameters (optional). Defaults to {}.

Returns:

    GuardrailResult: GuardrailResult indicating if the content is safe, with a reason if unsafe.

Raises:

    Exception: If the safety check fails due to engine errors.

check_output(content, **kwargs) async

Check output content for banned phrases.

Parameters:

    content (str): The output text to evaluate for safety. Required.
    **kwargs (Any): Additional engine-specific parameters (optional). Defaults to {}.

Returns:

    GuardrailResult: GuardrailResult indicating if the content is safe, with a reason if unsafe.

Raises:

    Exception: If the safety check fails due to engine errors.