Evaluating Perplexity on Language Models

This article is divided into two parts; they are: • What Is Perplexity and How to Compute It • Evaluate…

3 months ago

3 Smart Ways to Encode Categorical Features for Machine Learning

If you spend any time working with real-world data, you quickly realize that not everything comes in neat, clean numbers.

3 months ago

Pretraining a Llama Model on Your Local GPU

This article is divided into three parts; they are: • Training a Tokenizer with Special Tokens • Preparing the Training…

3 months ago

Rotary Position Embeddings for Long Context Length

This article is divided into two parts; they are: • Simple RoPE • RoPE for Long Context Length Compared to…

3 months ago

5 Agentic Coding Tips & Tricks

Agentic coding only feels "smart" when it ships correct diffs, passes tests, and leaves a paper trail you can trust.

3 months ago

How to Fine-Tune a Local Mistral or Llama 3 Model on Your Own Dataset

Large language models (LLMs) like Mistral 7B and Llama 3 8B have shaken up the AI field, but their broad nature…

3 months ago

Top 5 Vector Databases for High-Performance LLM Applications

Building AI applications often requires searching through millions of documents, finding similar items in massive catalogs, or retrieving relevant context…

3 months ago

Transformer vs LSTM for Time Series: Which Works Better?

From daily weather measurements and traffic sensor readings to stock prices, time series data are present nearly everywhere.

3 months ago

The Machine Learning Engineer’s Checklist: Best Practices for Reliable Models

Building machine learning models that work is a relatively straightforward endeavor, thanks to mature frameworks and accessible computing…

3 months ago

How LLMs Choose Their Words: A Practical Walk-Through of Logits, Softmax and Sampling

This article is divided into four parts; they are: • How Logits Become Probabilities • Temperature • Top-k Sampling…

3 months ago