Implement a PyTorch module for learned positional embeddings. The module should initialize the positional embeddings as learnable parameters, take as input a batch of sequences (each sequence is a tensor of embeddings), and return the sequences with the positional embeddings added. It should also handle sequences shorter than the configured maximum length.
{
"input": "model = LearnedPositionalEmbedding(max_seq_len=5, embedding_dim=10), input_embeddings = torch.randn(2, 5, 10)",
"output": "torch.Size([2, 5, 10]) # Output maintains shape with added positional embeddings"
}
{
"input": "model = LearnedPositionalEmbedding(max_seq_len=20, embedding_dim=20), input_embeddings = torch.randn(1, 10, 20)",
"output": "torch.Size([1, 10, 20]) # Handles sequences shorter than max_seq_len"
}
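A minimal sketch satisfying the examples above. The class and parameter names (`LearnedPositionalEmbedding`, `max_seq_len`, `embedding_dim`) follow the examples; the choice to store the table as an `nn.Parameter` initialized with small random values, and to slice it to the input's sequence length, is one reasonable implementation, not the only one (an `nn.Embedding` indexed by position would work equally well):

```python
import torch
import torch.nn as nn


class LearnedPositionalEmbedding(nn.Module):
    """Adds a learnable positional embedding to each position of the input."""

    def __init__(self, max_seq_len: int, embedding_dim: int):
        super().__init__()
        # One learnable vector per position; small random init is a common choice.
        self.pos_embedding = nn.Parameter(
            torch.randn(max_seq_len, embedding_dim) * 0.02
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, embedding_dim); seq_len may be <= max_seq_len.
        seq_len = x.size(1)
        # Slice the table to the actual length and broadcast over the batch.
        return x + self.pos_embedding[:seq_len]


model = LearnedPositionalEmbedding(max_seq_len=5, embedding_dim=10)
out = model(torch.randn(2, 5, 10))
print(out.shape)  # torch.Size([2, 5, 10])
```

Broadcasting handles the batch dimension automatically, so no `unsqueeze` is needed, and slicing with `[:seq_len]` covers the second example, where a length-10 sequence is fed to a module configured with `max_seq_len=20`.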
You may answer with Python data or a natural-language description.