
quaterion_models.heads.encoder_head module

class EncoderHead(input_embedding_size: int, dropout: float = 0.0, **kwargs)[source]

Bases: Module

Base class for the final layer of a fine-tuned model. When the encoders are frozen, the EncoderHead is the only trainable component of the model.

Parameters:
  • input_embedding_size – Size of the concatenated embedding obtained by combining the outputs of all configured encoders

  • dropout – Dropout probability. If dropout > 0.0, a dropout layer is applied to the embeddings before the head layer transformations

  • **kwargs
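
For illustration, here is a minimal sketch of a custom head built on this class. The ProjectionHead name, its output_size argument, and all internals below are assumptions made for the example, not part of the library; only the EncoderHead constructor and the transform / output_size / get_config_dict overrides documented on this page are taken from the source.

import torch
from torch import Tensor

from quaterion_models.heads.encoder_head import EncoderHead


class ProjectionHead(EncoderHead):
    """Hypothetical head: a single linear projection on top of frozen encoders."""

    def __init__(self, input_embedding_size: int, output_size: int, dropout: float = 0.0, **kwargs):
        super().__init__(input_embedding_size, dropout=dropout, **kwargs)
        self._output_size = output_size
        # Maps the concatenated encoder output to the target embedding size
        self.projection = torch.nn.Linear(input_embedding_size, output_size)

    @property
    def output_size(self) -> int:
        return self._output_size

    def transform(self, input_vectors: Tensor) -> Tensor:
        # input_vectors: (batch_size, input_embedding_size)
        return self.projection(input_vectors)

    def get_config_dict(self):
        # Assumption: extending the base config keeps load() able to re-create this head
        config = super().get_config_dict()
        config["output_size"] = self._output_size
        return config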

forward(input_vectors: Tensor, meta: Optional[List[Any]] = None) → Tensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance itself instead of this method, since the former takes care of running the registered hooks while the latter silently ignores them.
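
As the note says, the head should be invoked as a module rather than through forward directly. A quick illustration, reusing the hypothetical ProjectionHead sketched above:

head = ProjectionHead(input_embedding_size=768, output_size=128)
embeddings = torch.rand(32, 768)  # a batch of concatenated encoder outputs

output = head(embeddings)            # preferred: runs registered hooks
# output = head.forward(embeddings)  # works, but silently skips the hooks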

get_config_dict() → Dict[str, Any][source]

Constructs a savable parameters dict

Returns:

Serializable parameters for __init__ of the Module
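
As a rough illustration only (the exact keys are an assumption inferred from the constructor parameters above), the config for the hypothetical ProjectionHead might serialize as:

head.get_config_dict()
# plausibly: {'input_embedding_size': 768, 'dropout': 0.0, 'output_size': 128}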

classmethod load(input_path: str) → EncoderHead[source]

save(output_path)[source]
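
A sketch of a save/load round trip for the hypothetical head above, assuming save persists both the config from get_config_dict and the model weights under output_path (the directory path is made up for the example):

import os

save_dir = "/tmp/projection_head"  # hypothetical location
os.makedirs(save_dir, exist_ok=True)

head.save(save_dir)
restored = ProjectionHead.load(save_dir)  # classmethod re-creates the head from the saved files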
transform(input_vectors: Tensor) → Tensor[source]

Apply head-specific transformations to the embeddings tensor. Called as part of the forward function, which wraps it with the generic pieces of the head (such as the configured dropout).

Parameters:

input_vectors – Concatenated embeddings of all encoders. Shape: (batch_size, self.input_embedding_size)

Returns:

Final embeddings for the batch. Shape: (batch_size, self.output_size)

property output_size: int
training: bool
