Building a Decoder-Only Transformer Model for Text Generation

This post is divided into five parts; they are:
• From a Full Transformer to a Decoder-Only Model
• Building a Decoder-Only Model
• Data Preparation for Self-Supervised Learning
• Training the Model
• Extensions

The transformer model originated as a sequence-to-sequence (seq2seq) model that converts an input sequence into a context vector, which is …
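To give a flavor of what the post builds, here is a minimal PyTorch-style sketch of a decoder-only block: token and position embeddings feed self-attention layers restricted by a causal mask, and a linear head produces next-token logits. The class name `MiniDecoderOnlyLM` and every hyperparameter below are illustrative placeholders, not the post's actual code.

```python
import torch
import torch.nn as nn

class MiniDecoderOnlyLM(nn.Module):
    """Tiny decoder-only language model: embeddings -> masked self-attention -> logits."""

    def __init__(self, vocab_size=1000, d_model=128, nhead=4, num_layers=2, max_len=256):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        # An encoder layer works as a "decoder" block here because a decoder-only
        # model has no cross-attention; the causal mask provides the autoregression.
        layer = nn.TransformerEncoderLayer(d_model, nhead,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, idx):
        # idx: (batch, seq_len) of token ids
        seq_len = idx.size(1)
        pos = torch.arange(seq_len, device=idx.device)
        x = self.token_emb(idx) + self.pos_emb(pos)
        # Upper-triangular causal mask: position t only attends to positions <= t
        causal = nn.Transformer.generate_square_subsequent_mask(seq_len).to(idx.device)
        x = self.blocks(x, mask=causal)
        return self.lm_head(x)  # (batch, seq_len, vocab_size) next-token logits

model = MiniDecoderOnlyLM()
logits = model(torch.randint(0, 1000, (2, 16)))
print(logits.shape)  # torch.Size([2, 16, 1000])
```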

Building a Transformer Model for Language Translation

This post is divided into six parts; they are:
• Why Transformer is Better than Seq2Seq
• Data Preparation and Tokenization
• Design of a Transformer Model
• Building the Transformer Model
• Causal Mask and Padding Mask
• Training and Evaluation

Traditional seq2seq models with recurrent neural networks have two main limitations:
• Sequential …
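Of the parts listed above, the causal mask and padding mask are the pieces readers most often trip over. A hedged PyTorch sketch of how the two masks might be built for a batch of target tokens is shown below; the `PAD_ID` value and the helper name `make_masks` are assumptions for illustration, not the post's own code.

```python
import torch
import torch.nn as nn

PAD_ID = 0  # assumed padding token id

def make_masks(tgt_ids: torch.Tensor):
    """Build a causal mask and a padding mask for a batch of target token ids.

    Causal mask: blocks attention from position t to any future position > t.
    Padding mask: blocks attention to PAD positions in each sequence.
    """
    seq_len = tgt_ids.size(1)
    # Float mask with -inf above the diagonal, 0 elsewhere
    causal_mask = nn.Transformer.generate_square_subsequent_mask(seq_len)
    # Boolean mask, True where the token is padding and should be ignored
    padding_mask = tgt_ids.eq(PAD_ID)
    return causal_mask, padding_mask

batch = torch.tensor([[5, 7, 9, 0, 0],
                      [3, 4, 6, 8, 2]])
causal_mask, padding_mask = make_masks(batch)
print(causal_mask.shape, padding_mask)
# These would typically be passed to nn.Transformer as tgt_mask and tgt_key_padding_mask.
```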

Building a Seq2Seq Model with Attention for Language Translation

This post is divided into four parts; they are:
• Why Attention Matters: Limitations of Basic Seq2Seq Models
• Implementing Seq2Seq Model with Attention
• Training and Evaluating the Model
• Using the Model

Traditional seq2seq models use an encoder-decoder architecture where the encoder compresses the input sequence into a single context vector, which the …
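To make concrete what attention adds over that single context vector, here is a minimal dot-product attention module in PyTorch: the decoder scores every encoder state against its current state and builds a fresh, weighted context vector at each step. This is a sketch under assumed shapes; the post itself may use a different scoring function (e.g. additive/Bahdanau attention), and the class name is hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DotProductAttention(nn.Module):
    """Score each encoder state against the decoder state and return a
    weighted sum (a per-step context vector) plus the attention weights."""

    def forward(self, decoder_state, encoder_outputs):
        # decoder_state: (batch, hidden); encoder_outputs: (batch, src_len, hidden)
        scores = torch.bmm(encoder_outputs, decoder_state.unsqueeze(2)).squeeze(2)
        weights = F.softmax(scores, dim=1)                        # (batch, src_len)
        context = torch.bmm(weights.unsqueeze(1), encoder_outputs).squeeze(1)
        return context, weights                                   # context: (batch, hidden)

attn = DotProductAttention()
dec_state = torch.randn(2, 64)
enc_outs = torch.randn(2, 10, 64)
context, weights = attn(dec_state, enc_outs)
print(context.shape, weights.shape)  # torch.Size([2, 64]) torch.Size([2, 10])
```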