Implement a Multi-Layer Encoder

Implement a multi-layer encoder with residual connections by completing the `MultiLayerEncoder` class: fill in the missing parts, specifically the forward pass logic. One possible approach is sketched after the constraints below.

Constraints

  • The input dimension (input_dim) and hidden dimension (hidden_dim) must be positive integers.
  • The number of layers (num_layers) must be a positive integer.
  • The activation function should be either 'relu' or 'tanh'.
  • You can use the `SingleLayerEncoder` class you implemented in the previous question.
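
Below is a minimal sketch of one way to structure the forward pass, assuming PyTorch and a SingleLayerEncoder interface of SingleLayerEncoder(input_dim, output_dim, activation) that maps (batch_size, input_dim) to (batch_size, output_dim); the actual interface from the previous question may differ. Because the first layer changes the feature dimension, the residual connection is only applied from the second layer onward.

import torch
import torch.nn as nn


class SingleLayerEncoder(nn.Module):
    """Assumed single-layer encoder: linear projection followed by an activation."""

    def __init__(self, input_dim: int, output_dim: int, activation: str = "relu"):
        super().__init__()
        self.linear = nn.Linear(input_dim, output_dim)
        self.activation = nn.ReLU() if activation == "relu" else nn.Tanh()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.activation(self.linear(x))


class MultiLayerEncoder(nn.Module):
    """Stack of single-layer encoders with residual connections."""

    def __init__(self, input_dim: int, hidden_dim: int, num_layers: int, activation: str = "relu"):
        super().__init__()
        if input_dim <= 0 or hidden_dim <= 0 or num_layers <= 0:
            raise ValueError("input_dim, hidden_dim, and num_layers must be positive integers")
        if activation not in ("relu", "tanh"):
            raise ValueError("activation must be 'relu' or 'tanh'")
        # First layer projects input_dim -> hidden_dim; the remaining layers keep hidden_dim.
        self.layers = nn.ModuleList(
            [SingleLayerEncoder(input_dim, hidden_dim, activation)]
            + [SingleLayerEncoder(hidden_dim, hidden_dim, activation) for _ in range(num_layers - 1)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The first layer changes the feature dimension, so no residual is added here.
        out = self.layers[0](x)
        # Subsequent layers preserve the shape, so a residual connection is safe.
        for layer in self.layers[1:]:
            out = out + layer(out)
        return out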

Examples

Example 1

{
  "constructor": {
    "input_dim": 128,
    "hidden_dim": 256,
    "num_layers": 3,
    "activation": "relu"
  },
  "input_data": "A tensor of shape (batch_size, input_dim) representing the input data",
  "output": "A tensor of shape (batch_size, hidden_dim) representing the encoded output"
}

Example 2

{
  "constructor": {
    "input_dim": 64,
    "hidden_dim": 128,
    "num_layers": 5,
    "activation": "tanh"
  },
  "input_data": "A tensor of shape (batch_size, input_dim) representing the input data",
  "output": "A tensor of shape (batch_size, hidden_dim) representing the encoded output"
}
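
A usage sketch matching Example 1, assuming the class sketch above and an arbitrary batch size of 4:

encoder = MultiLayerEncoder(input_dim=128, hidden_dim=256, num_layers=3, activation="relu")
x = torch.randn(4, 128)   # (batch_size, input_dim)
y = encoder(x)
print(y.shape)            # torch.Size([4, 256]) -> (batch_size, hidden_dim)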
