
quaterion_models.heads.encoder_head module

class EncoderHead(input_embedding_size: int, dropout: float = 0.0, **kwargs)[source]

Bases: Module

Base class for the final layer of a fine-tuned model. When the encoders are frozen, the EncoderHead is the model's only trainable component.

  • input_embedding_size – Size of the concatenated embedding, obtained by combining the outputs of all configured encoders

  • dropout – Dropout probability. If dropout > 0.0, a dropout layer is applied to the embeddings before the head-layer transformations

  • **kwargs
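A hedged sketch of the subclassing pattern follows. It uses plain torch.nn.Module in place of the real EncoderHead base class so the example is self-contained; the class name SkipConnectionHead and the forward logic (dropout, then transform) are illustrative assumptions, not part of the library:

```python
import torch
from torch import Tensor


class SkipConnectionHead(torch.nn.Module):
    """Illustrative head: a linear layer plus a residual (skip) connection.

    Stands in for an EncoderHead subclass; the real base class lives in
    quaterion_models.heads.encoder_head.
    """

    def __init__(self, input_embedding_size: int, dropout: float = 0.0, **kwargs):
        super().__init__()
        self.input_embedding_size = input_embedding_size
        # Apply dropout only when a positive probability is configured
        self.dropout = (
            torch.nn.Dropout(p=dropout) if dropout > 0.0 else torch.nn.Identity()
        )
        self.fc = torch.nn.Linear(input_embedding_size, input_embedding_size)

    @property
    def output_size(self) -> int:
        # This head preserves the embedding dimensionality
        return self.input_embedding_size

    def transform(self, input_vectors: Tensor) -> Tensor:
        # Head-specific transformation: linear layer + skip connection
        return self.fc(input_vectors) + input_vectors

    def forward(self, input_vectors: Tensor, meta=None) -> Tensor:
        # Generic part first (dropout), then the head-specific transform
        return self.transform(self.dropout(input_vectors))


head = SkipConnectionHead(input_embedding_size=256, dropout=0.1)
embeddings = torch.rand(8, 256)
output = head(embeddings)
assert output.shape == (8, 256)
```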

forward(input_vectors: Tensor, meta: Optional[List[Any]] = None) Tensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.


Although the recipe for the forward pass must be defined within this function, call the Module instance itself rather than invoking forward() directly: the instance call takes care of running the registered hooks, while a direct forward() call silently ignores them.
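This distinction holds for any torch.nn.Module and can be seen with a minimal forward hook (illustrative names, plain torch):

```python
import torch


class Doubler(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return 2 * x


module = Doubler()
hook_calls = []
# Forward hooks receive (module, input, output) after each instance call
module.register_forward_hook(lambda mod, inp, out: hook_calls.append(out))

x = torch.ones(3)
y1 = module(x)          # instance call: registered hooks run
y2 = module.forward(x)  # direct call: hooks are silently skipped

assert torch.equal(y1, y2)   # same numerical result...
assert len(hook_calls) == 1  # ...but the hook fired only once
```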

get_config_dict() Dict[str, Any][source]

Constructs a savable params dict


Serializable parameters for __init__ of the Module

classmethod load(input_path: str) EncoderHead[source]
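The library's actual on-disk format is not shown here; the following is a hedged sketch of the config-plus-weights pattern that get_config_dict() and load() suggest, written with a self-contained stand-in class. The class name TinyHead, the file names config.json and weights.pt, and the save() helper are all assumptions for illustration:

```python
import json
import os
import tempfile

import torch


class TinyHead(torch.nn.Module):
    """Illustrative stand-in for an EncoderHead subclass."""

    def __init__(self, input_embedding_size: int, dropout: float = 0.0):
        super().__init__()
        self.input_embedding_size = input_embedding_size
        self.dropout_prob = dropout
        self.fc = torch.nn.Linear(input_embedding_size, input_embedding_size)

    def get_config_dict(self):
        # Only serializable __init__ arguments belong here
        return {
            "input_embedding_size": self.input_embedding_size,
            "dropout": self.dropout_prob,
        }

    def save(self, output_path: str) -> None:
        os.makedirs(output_path, exist_ok=True)
        with open(os.path.join(output_path, "config.json"), "w") as f:
            json.dump(self.get_config_dict(), f)
        torch.save(self.state_dict(), os.path.join(output_path, "weights.pt"))

    @classmethod
    def load(cls, input_path: str) -> "TinyHead":
        # Rebuild the module from its saved config, then restore weights
        with open(os.path.join(input_path, "config.json")) as f:
            config = json.load(f)
        head = cls(**config)
        head.load_state_dict(torch.load(os.path.join(input_path, "weights.pt")))
        return head


with tempfile.TemporaryDirectory() as path:
    head = TinyHead(input_embedding_size=64, dropout=0.1)
    head.save(path)
    restored = TinyHead.load(path)

assert restored.get_config_dict() == {"input_embedding_size": 64, "dropout": 0.1}
assert torch.equal(restored.fc.weight, head.fc.weight)
```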
transform(input_vectors: Tensor) Tensor[source]

Apply head-specific transformations to the embeddings tensor. Called as part of the forward function, which wraps it with generic transformations (such as dropout).


input_vectors – Concatenated embeddings of all encoders. Shape: (batch_size, self.input_embedding_size)


Final embeddings for a batch – (batch_size, self.output_size)
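A head's output size need not match its input size. A hypothetical projection head (self-contained, plain torch; the class and parameter names are illustrative) shows the shape contract between transform and output_size:

```python
import torch
from torch import Tensor


class ProjectionHead(torch.nn.Module):
    """Hypothetical head that projects embeddings into a smaller space."""

    def __init__(self, input_embedding_size: int, output_embedding_size: int):
        super().__init__()
        self._output_size = output_embedding_size
        self.projection = torch.nn.Linear(input_embedding_size, output_embedding_size)

    @property
    def output_size(self) -> int:
        return self._output_size

    def transform(self, input_vectors: Tensor) -> Tensor:
        # (batch_size, input_embedding_size) -> (batch_size, output_size)
        return self.projection(input_vectors)


head = ProjectionHead(input_embedding_size=768, output_embedding_size=128)
batch = torch.rand(32, 768)
projected = head.transform(batch)
assert projected.shape == (32, 128)
```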

property output_size: int
training: bool

