# biome.text.modules.encoders.time_distributed_encoder Module

# TimeDistributedEncoder Class


class TimeDistributedEncoder(encoder: allennlp.modules.seq2seq_encoders.seq2seq_encoder.Seq2SeqEncoder)

Wraps a Seq2SeqEncoder in a TimeDistributed module while still implementing the Seq2SeqEncoder API.

Initializes internal Module state, shared by both nn.Module and ScriptModule.
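The wrapping mechanics can be sketched in plain Python: a time-distributed wrapper merges the two leading dimensions of a `(batch, num_wrapped, seq_len, dim)` input, runs the inner encoder on the resulting `(batch * num_wrapped, seq_len, dim)` view, and splits the result back afterwards. The sketch below illustrates that idea with nested lists and a toy encoder; it is not the actual biome.text/AllenNLP implementation, which operates on tensors.

```python
def time_distributed(encoder, x):
    """Apply `encoder`, which expects a 3-D input [batch, seq_len, dim],
    to a 4-D input [batch, num_wrapped, seq_len, dim] by merging the two
    leading dimensions, encoding, and splitting them back afterwards."""
    num_wrapped = len(x[0])
    # (batch, num_wrapped, seq_len, dim) -> (batch * num_wrapped, seq_len, dim)
    merged = [seq for wrapped in x for seq in wrapped]
    encoded = [encoder(seq) for seq in merged]
    # (batch * num_wrapped, seq_len, dim) -> (batch, num_wrapped, seq_len, dim)
    return [encoded[i:i + num_wrapped] for i in range(0, len(encoded), num_wrapped)]

# A toy "seq2seq encoder" that keeps the sequence shape but doubles each value.
double = lambda seq: [[2 * v for v in vec] for vec in seq]

batch = [  # batch=2, num_wrapped=2, seq_len=2, dim=2
    [[[1, 1], [2, 2]], [[3, 3], [4, 4]]],
    [[[5, 5], [6, 6]], [[7, 7], [8, 8]]],
]
out = time_distributed(double, batch)
```

The outer `(batch, num_wrapped)` structure of `out` matches the input, which is what lets the wrapper keep presenting the plain Seq2SeqEncoder API.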

# Ancestors

  • allennlp.modules.seq2seq_encoders.seq2seq_encoder.Seq2SeqEncoder
  • allennlp.modules.encoder_base._EncoderBase
  • torch.nn.modules.module.Module
  • allennlp.common.registrable.Registrable
  • allennlp.common.from_params.FromParams

# forward Method


def forward(self, *input, **inputs)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance instead of calling this method directly, since the former takes care of running the registered hooks while the latter silently ignores them.
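The difference between calling the instance and calling `forward` directly can be shown with a stripped-down stand-in for `torch.nn.Module` (hypothetical, for illustration only):

```python
class MiniModule:
    """A minimal sketch of the nn.Module calling convention: `__call__`
    runs `forward` and then the registered hooks, while calling
    `forward` directly skips the hooks."""

    def __init__(self):
        self._forward_hooks = []

    def register_forward_hook(self, hook):
        self._forward_hooks.append(hook)

    def __call__(self, *args):
        # Calling the instance runs forward *and* every registered hook.
        output = self.forward(*args)
        for hook in self._forward_hooks:
            hook(self, args, output)
        return output

    def forward(self, x):
        return x + 1

seen = []
m = MiniModule()
m.register_forward_hook(lambda module, inputs, output: seen.append(output))

m(1)          # instance call: the hook fires
m.forward(1)  # direct call: the hook is silently skipped
```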

# is_bidirectional Method


def is_bidirectional(self) -> bool

Returns True if this encoder is bidirectional. If so, we assume the forward direction of the encoder is the first half of the final dimension, and the backward direction is the second half.
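Under that convention, each output vector can be split into its forward and backward halves along the final dimension. A plain-Python sketch (in practice this is tensor slicing):

```python
def split_directions(vector):
    """Split a bidirectional encoder output vector into
    (forward_half, backward_half), following the convention that the
    forward direction occupies the first half of the final dimension."""
    dim = len(vector)
    assert dim % 2 == 0, "bidirectional outputs have an even final dimension"
    return vector[:dim // 2], vector[dim // 2:]

fwd, bwd = split_directions([1, 2, 3, 4])
```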

# get_output_dim Method


def get_output_dim(self) -> int

Returns the dimension of each vector in the sequence output by this Seq2SeqEncoder. This is not the shape of the returned tensor, but the last element of that shape.

# get_input_dim Method


def get_input_dim(self)

Returns the dimension of the vector input for each element in the sequence input to a Seq2SeqEncoder. This is not the shape of the input tensor, but the last element of that shape.
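Both methods describe only the last element of a shape, not the full tensor shape. A small sketch of how these values relate to the input and output shapes (the concrete sizes here are made up for illustration):

```python
# Hypothetical shapes for a Seq2SeqEncoder with input_dim=300, output_dim=512.
input_shape = (32, 40, 300)   # (batch_size, seq_len, input_dim)
output_shape = (32, 40, 512)  # (batch_size, seq_len, output_dim)

# get_input_dim() / get_output_dim() correspond to the *last* element of
# each shape, not to the shape itself.
input_dim = input_shape[-1]    # 300
output_dim = output_shape[-1]  # 512
```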
