Interpolation in Positional Encodings and Using YaRN for Larger Context Window
This post is divided into three parts; they are:

• Interpolation and Extrapolation in Sinusoidal Encodings and RoPE
• Interpolation in Learned Encodings
• YaRN for Larger Context Window

Sinusoidal encodings excel at extrapolation due to their use of continuous functions:

$$
\begin{aligned}
PE(p, 2i) &= \sin\left(\frac{p}{10000^{2i/d}}\right) \\
PE(p, 2i+1) &= \cos\left(\frac{p}{10000^{2i/d}}\right)
\end{aligned}
$$

You can simply substitute $p$ with a larger value to obtain the positional encoding for a longer sequence.
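Because the encoding is a closed-form function of the position $p$, no new parameters are needed for unseen positions. Here is a minimal NumPy sketch of the formula above; the function name and the example training length of 512 are illustrative choices, not from the original post:

```python
import numpy as np

def sinusoidal_encoding(positions, d_model, base=10000.0):
    """Compute sinusoidal positional encodings for a 1-D array of positions."""
    # i indexes the dimension pairs (2i, 2i+1)
    i = np.arange(d_model // 2)
    # angles has shape (len(positions), d_model // 2)
    angles = positions[:, None] / base ** (2 * i / d_model)
    pe = np.empty((len(positions), d_model))
    pe[:, 0::2] = np.sin(angles)  # PE(p, 2i)
    pe[:, 1::2] = np.cos(angles)  # PE(p, 2i+1)
    return pe

# Extrapolation: encode positions beyond a hypothetical training length of 512
# using the very same formula, with no retraining involved.
pe = sinusoidal_encoding(np.arange(512, 1024), d_model=64)
print(pe.shape)  # (512, 64)
```

The same call works for any position range, which is exactly what makes extrapolation trivial for sinusoidal encodings.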