Implement Learned Positional Embedding

Implement a PyTorch module for learned positional embeddings. The module should initialize the positional embeddings as learnable parameters, accept a batch of sequences (each sequence a tensor of token embeddings), and return the sequences with the positional embeddings added.

Constraints

  • Positional embeddings must be truncated to the input sequence length when it is shorter than max_seq_len
  • The embedding dimension must match between the input and the positional embeddings
  • The module should handle varying batch sizes without storing a persistent batch dimension in the embeddings
  • No hard-coded sequence lengths in the forward pass (it should work for any length up to the initialized max_seq_len)
  • Do not use an existing PyTorch positional-encoding module directly.

Examples

Example 1

{
  "input": "model = LearnedPositionalEmbedding(max_seq_len=5, embedding_dim=10), input_embeddings = torch.randn(2, 5, 10)",
  "output": "torch.Size([2, 5, 10]) # Output maintains shape with added positional embeddings"
}

Example 2

{
  "input": "model = LearnedPositionalEmbedding(max_seq_len=20, embedding_dim=20), input_embeddings = torch.randn(1, 10, 20)",
  "output": "torch.Size([1, 10, 20]) # Handles sequences shorter than max_seq_len"
}
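
The examples above can be reproduced with a minimal sketch of one possible solution. The class name and constructor arguments follow the problem statement; the normal initialization (std=0.02) is a common design choice, not a requirement:

```python
import torch
import torch.nn as nn

class LearnedPositionalEmbedding(nn.Module):
    """Adds learnable positional embeddings to a batch of sequence embeddings."""

    def __init__(self, max_seq_len: int, embedding_dim: int):
        super().__init__()
        # One learnable vector per position; no batch dimension is stored,
        # so the same table serves any batch size.
        self.pos_embed = nn.Parameter(torch.empty(max_seq_len, embedding_dim))
        nn.init.normal_(self.pos_embed, std=0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, embedding_dim) with seq_len <= max_seq_len.
        # Slice the table to the actual sequence length; broadcasting then
        # adds the (seq_len, dim) slice across the batch dimension.
        seq_len = x.size(1)
        return x + self.pos_embed[:seq_len]

# Example 1: full-length sequence
model = LearnedPositionalEmbedding(max_seq_len=5, embedding_dim=10)
print(model(torch.randn(2, 5, 10)).shape)  # torch.Size([2, 5, 10])

# Example 2: sequence shorter than max_seq_len
model = LearnedPositionalEmbedding(max_seq_len=20, embedding_dim=20)
print(model(torch.randn(1, 10, 20)).shape)  # torch.Size([1, 10, 20])
```

Because the slice `self.pos_embed[:seq_len]` is taken inside `forward`, no sequence length is hard-coded and the parameter remains a single learnable `(max_seq_len, embedding_dim)` table.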
