
quaterion_models.heads.skip_connection_head module

class SkipConnectionHead(input_embedding_size: int, dropout: float = 0.0, skip_dropout: float = 0.0, n_layers: int = 1)

Bases: EncoderHead

Unites the idea of gated head and residual connections.

Schema:
        ├──────────┐
┌───────┴───────┐  │
│  Skip-Dropout │  │
└───────┬───────┘  │
┌───────┴───────┐  │
│     Linear    │  │
└───────┬───────┘  │
┌───────┴───────┐  │
│     Gated     │  │
└───────┬───────┘  │
        + <────────┘
        │
Parameters:
  • input_embedding_size – Size of the concatenated embedding, obtained by combining the outputs of all configured encoders

  • dropout – Dropout probability. If dropout > 0., a dropout layer is applied to the embeddings before the head layer transformations

  • skip_dropout – Additional dropout, applied to the trainable branch only. Keeping this dropout separate avoids perturbing the original embedding passed through the skip connection.

  • n_layers – Number of gated residual blocks stacked on top of each other.
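The schema above can be sketched numerically. The following is a minimal NumPy illustration of a single gated residual block, not the library's implementation; the exact gating formula and weight initialization are assumptions for the sake of the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def skip_connection_block(x, weight, bias, gate, skip_dropout_mask=None):
    """One gated residual block: Skip-Dropout -> Linear -> Gated -> add input.

    Hypothetical formulation for illustration only; the library's exact
    gating and dropout details may differ.
    """
    h = x if skip_dropout_mask is None else x * skip_dropout_mask  # Skip-Dropout
    h = h @ weight.T + bias        # Linear
    h = h * sigmoid(gate)          # Gated: per-dimension learned gate
    return x + h                   # residual connection back to the input

# With zero-initialized weights the trainable branch outputs zeros,
# so the block starts as an identity mapping over the embedding.
dim = 4
x = np.random.randn(2, dim)
out = skip_connection_block(x, np.zeros((dim, dim)), np.zeros(dim), np.zeros(dim))
assert np.allclose(out, x)
```

Starting from an identity mapping is the practical point of the gated residual design: training can only improve on the raw concatenated embedding rather than destroy it early on.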

get_config_dict() → Dict[str, Any]

Constructs a savable parameters dict

Returns:

Serializable parameters for __init__ of the Module
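As an illustration, the returned dictionary plausibly mirrors the __init__ parameters listed above; the key names and example values here are assumptions, not confirmed against the source.

```python
# Hypothetical result of get_config_dict() for a head built with defaults;
# key names are assumed to mirror the __init__ parameters.
config = {
    "input_embedding_size": 768,
    "dropout": 0.0,
    "skip_dropout": 0.0,
    "n_layers": 1,
}
```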

reset_parameters() → None
transform(input_vectors: Tensor) → Tensor
Parameters:

input_vectors – shape: (batch_size, input_embedding_size)

Returns:

torch.Tensor – shape: (batch_size, input_embedding_size)

property output_size: int
training: bool
