Langchain

Module containing custom classes that extend LangChain's Embeddings class.

EMInvokerEmbeddings

Bases: BaseModel, Embeddings

An adapter class that enables an EMInvoker to be used as a LangChain Embeddings instance.

Attributes:

    em_invoker (BaseEMInvoker): The EMInvoker instance to interact with.

Usage example:

from gllm_inference.em_invoker.langchain import EMInvokerEmbeddings
from gllm_inference.em_invoker import OpenAIEMInvoker

em_invoker = OpenAIEMInvoker(...)
embeddings = EMInvokerEmbeddings(em_invoker=em_invoker)

aembed_documents(texts, **kwargs) async

Asynchronously embed documents using the EMInvoker.

Parameters:

    texts (list[str]): The list of texts to embed. Required.
    **kwargs (Any): Additional keyword arguments to pass to the EMInvoker's invoke method. Defaults to {}.

Returns:

    list[Vector]: List of embeddings, one for each text.
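The delegation behind this method can be sketched with a stub invoker. The stub class, its invoke signature, and the fake embedding logic are illustrative assumptions, not part of gllm_inference:

```python
import asyncio


class StubEMInvoker:
    """Hypothetical stand-in for a BaseEMInvoker: its async invoke
    method returns one embedding vector per input text."""

    async def invoke(self, texts: list[str], **kwargs) -> list[list[float]]:
        # Fake embedding: encode each text by its character and word counts.
        return [[float(len(t)), float(len(t.split()))] for t in texts]


class StubEmbeddings:
    """Minimal adapter sketch: aembed_documents forwards to invoke."""

    def __init__(self, em_invoker: StubEMInvoker):
        self.em_invoker = em_invoker

    async def aembed_documents(self, texts: list[str], **kwargs) -> list[list[float]]:
        return await self.em_invoker.invoke(texts, **kwargs)


embeddings = StubEmbeddings(StubEMInvoker())
vectors = asyncio.run(embeddings.aembed_documents(["hello world", "hi"]))
print(vectors)  # [[11.0, 2.0], [2.0, 1.0]]
```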

aembed_query(text, **kwargs) async

Asynchronously embed a query using the EMInvoker.

Parameters:

    text (str): The text to embed. Required.
    **kwargs (Any): Additional keyword arguments to pass to the EMInvoker's invoke method. Defaults to {}.

Returns:

    Vector: The embedding of the text.

embed_documents(texts, **kwargs)

Embed documents using the EMInvoker.

Parameters:

    texts (list[str]): The list of texts to embed. Required.
    **kwargs (Any): Additional keyword arguments to pass to the EMInvoker's invoke method. Defaults to {}.

Returns:

    list[Vector]: List of embeddings, one for each text.

embed_query(text, **kwargs)

Embed a query using the EMInvoker.

Parameters:

    text (str): The text to embed. Required.
    **kwargs (Any): Additional keyword arguments to pass to the EMInvoker's invoke method. Defaults to {}.

Returns:

    Vector: The embedding of the text.
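A common way for the synchronous methods to reuse the async path is to drive the coroutine to completion with an event loop. How the real class bridges sync and async is an assumption; a minimal sketch with stub classes:

```python
import asyncio


class StubEMInvoker:
    """Hypothetical invoker whose async invoke returns one vector per text."""

    async def invoke(self, texts: list[str], **kwargs) -> list[list[float]]:
        # Fake embedding: one-dimensional vector of the text length.
        return [[float(len(t))] for t in texts]


class StubEmbeddings:
    def __init__(self, em_invoker: StubEMInvoker):
        self.em_invoker = em_invoker

    async def aembed_query(self, text: str, **kwargs) -> list[float]:
        # Embed a single text and unwrap the one-element batch.
        return (await self.em_invoker.invoke([text], **kwargs))[0]

    def embed_query(self, text: str, **kwargs) -> list[float]:
        # The sync variant simply runs the async variant to completion.
        return asyncio.run(self.aembed_query(text, **kwargs))


vector = StubEmbeddings(StubEMInvoker()).embed_query("hello")
print(vector)  # [5.0]
```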

TEIEmbeddings

Bases: BaseModel, Embeddings

A custom LangChain Embeddings class to interact with Text Embeddings Inference (TEI).

Attributes:

    url (str): The URL of the TEI service that hosts the embedding model.
    api_key (str | None): The API key for the TEI service. Defaults to None.
    client (InferenceClient): The client instance used to interact with the TEI service.
    query_prefix (str): The prefix prepended when embedding a query.
    document_prefix (str): The prefix prepended when embedding documents.

Initialize with URL and API key example:

from gllm_inference.em_invoker.langchain import TEIEmbeddings

embeddings = TEIEmbeddings(url="<url-to-tei-service>", api_key="<my-api-key>")

Initialize with only URL example:

from gllm_inference.em_invoker.langchain import TEIEmbeddings

embeddings = TEIEmbeddings(url="<url-to-tei-service>")

Initialize with client example:

from gllm_inference.em_invoker.langchain import TEIEmbeddings
from huggingface_hub import InferenceClient

client = InferenceClient(model="<url-to-tei-service>", api_key="<my-api-key>")
embeddings = TEIEmbeddings(client=client)
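The effect of query_prefix and document_prefix can be sketched with stubs. Where exactly the real class applies the prefixes is an assumption, and fake_tei_embed is a stand-in for the actual TEI call:

```python
def fake_tei_embed(text: str) -> list[float]:
    """Stand-in for the TEI feature-extraction call: embeds by length."""
    return [float(len(text))]


class StubTEIEmbeddings:
    def __init__(self, query_prefix: str = "", document_prefix: str = ""):
        self.query_prefix = query_prefix
        self.document_prefix = document_prefix

    def embed_query(self, text: str) -> list[float]:
        # The query prefix is prepended before the text is embedded.
        return fake_tei_embed(self.query_prefix + text)

    def embed_documents(self, texts: list[str]) -> list[list[float]]:
        # The document prefix is prepended to every document.
        return [fake_tei_embed(self.document_prefix + t) for t in texts]


emb = StubTEIEmbeddings(query_prefix="query: ", document_prefix="passage: ")
print(emb.embed_query("cats"))        # [11.0]  ("query: cats" has 11 chars)
print(emb.embed_documents(["cats"]))  # [[13.0]] ("passage: cats" has 13 chars)
```

Such prefixes are commonly required by instruction-tuned embedding models, which expect queries and passages to be marked differently.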

embed_documents(texts)

Embed documents using TEI's hosted embedding model.

Parameters:

    texts (list[str]): The list of texts to embed. Required.

Returns:

    list[Vector]: List of embeddings, one for each text.

embed_query(text)

Embed a query using TEI's hosted embedding model.

Parameters:

    text (str): The text to embed. Required.

Returns:

    Vector: The embedding of the text.

validate_environment()

Validates that the TEI service URL and the required Python package exist in the environment.

The validation is done in the following order:

1. If neither url nor client is provided, an error is raised.
2. If an invalid client is provided, an error is raised.
3. If url is provided, it is used to initialize the client for the TEI service, along with an optional api_key.