
Regularized Training of Nearest Neighbor Language Models

Including memory banks in a natural language processing architecture increases model capacity by equipping it with additional data at inference time. In this paper, we build upon kNN-LM, which uses a pre-trained language model together with an exhaustive kNN search through the training data (memory bank) to achieve state-of-the-art results. We investigate whether we can improve kNN-LM performance by instead training an LM with the knowledge that a kNN search will be applied post hoc. We achieve significant improvements with our method on language modeling tasks on WIKI-2 and WIKI-103. The main…
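For context, a minimal sketch of the kNN-LM inference step the abstract builds on: the LM's distribution over the next token is interpolated with a distribution derived from the k nearest entries in the memory bank. The hyperparameter values (`k`, `lam`, `temperature`) and function name here are illustrative assumptions, not the paper's.

```python
import numpy as np

def knn_lm_next_token_probs(
    query,            # context representation from the LM, shape (d,)
    lm_probs,         # base LM distribution over the vocab, shape (V,)
    keys,             # stored context representations (memory bank), shape (N, d)
    values,           # next-token ids paired with each key, shape (N,)
    k=8,              # number of neighbors to retrieve (illustrative)
    temperature=1.0,  # softmax temperature over negative distances (illustrative)
    lam=0.25,         # interpolation weight for the kNN distribution (illustrative)
):
    """Interpolate the LM distribution with a kNN distribution built
    from the k nearest memory-bank entries (exhaustive search)."""
    # Exhaustive L2 search over the memory bank, as in kNN-LM.
    dists = np.linalg.norm(keys - query, axis=1)
    nn = np.argsort(dists)[:k]

    # Softmax over negative distances gives neighbor weights.
    w = np.exp(-dists[nn] / temperature)
    w /= w.sum()

    # Scatter neighbor weights onto their target tokens.
    knn_probs = np.zeros_like(lm_probs)
    np.add.at(knn_probs, values[nn], w)

    # Final distribution: lambda * p_kNN + (1 - lambda) * p_LM.
    return lam * knn_probs + (1.0 - lam) * lm_probs
```

The paper's question is then whether training the LM with this post-hoc retrieval step in mind improves the combined model over applying kNN search to an ordinarily trained LM.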