Implement Sinusoidal Positional Encoding

Implement a function in PyTorch that generates the sinusoidal positional encodings for a given sequence length and embedding dimension.
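
For reference, the standard formulation (from "Attention Is All You Need") assigns, for position pos and dimension pair index i:

  PE(pos, 2i)   = sin(pos / 10000^(2i / embedding_dim))
  PE(pos, 2i+1) = cos(pos / 10000^(2i / embedding_dim))

so even dimensions hold sines and odd dimensions hold cosines, with wavelengths forming a geometric progression.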

Constraints

  • The sequence_length and embedding_dim must be positive integers.
  • The function should return a torch.Tensor of shape (sequence_length, embedding_dim).
  • Use PyTorch for numerical operations and tensor creation.
  • Don't use a built-in PyTorch module for positional encoding; compute the encodings yourself.

Examples

Example 1

{
  "input": "sequence_length=10, embedding_dim=16",
  "output": "A torch.Tensor of shape (10, 16) containing the sinusoidal positional encodings."
}

Example 2

{
  "input": "sequence_length=1, embedding_dim=8",
  "output": "A torch.Tensor of shape (1, 8) containing the sinusoidal positional encodings."
}
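
As a rough guide, here is a minimal sketch of one way to satisfy the constraints; the function name sinusoidal_positional_encoding is illustrative, not required:

  import math
  import torch

  def sinusoidal_positional_encoding(sequence_length: int, embedding_dim: int) -> torch.Tensor:
      """Return a (sequence_length, embedding_dim) tensor of sinusoidal positional encodings."""
      if sequence_length <= 0 or embedding_dim <= 0:
          raise ValueError("sequence_length and embedding_dim must be positive integers")

      # positions: (sequence_length, 1), one row per token position.
      positions = torch.arange(sequence_length, dtype=torch.float32).unsqueeze(1)

      # div_term: the 1 / 10000^(2i / embedding_dim) factor for each sine/cosine pair.
      div_term = torch.exp(
          torch.arange(0, embedding_dim, 2, dtype=torch.float32)
          * (-math.log(10000.0) / embedding_dim)
      )

      pe = torch.zeros(sequence_length, embedding_dim)
      pe[:, 0::2] = torch.sin(positions * div_term)                           # even indices: sine
      pe[:, 1::2] = torch.cos(positions * div_term)[:, : embedding_dim // 2]  # odd indices: cosine
      return pe

Calling sinusoidal_positional_encoding(10, 16) should return a tensor of shape torch.Size([10, 16]), matching Example 1.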
