
When can transformers reason with abstract symbols?

We investigate the capabilities of transformer models on relational reasoning tasks. In these tasks, models are trained on a set of strings encoding abstract relations, and are then tested out-of-distribution on data that contains symbols that did not appear in the training dataset. We prove that for any relational reasoning task in a large family of tasks, transformers learn the abstract relations and generalize to the test set when trained by gradient descent on sufficiently large quantities of training data. This is in contrast to classical fully-connected networks, which we prove fail to…
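The abstract's test setup, training on one symbol vocabulary and evaluating on entirely unseen symbols, can be illustrated with a toy relational task. The sketch below (a hypothetical example, not the paper's actual task family) generates a "same/different" relation over disjoint train and test symbol sets, so a model can only succeed out-of-distribution by learning the abstract relation rather than memorizing the symbols:

```python
import random

def make_same_different_split(train_symbols, test_symbols, n_examples, seed=0):
    """Toy relational task: label is 1 iff the two symbols are identical.

    Train and test examples draw from disjoint symbol sets, so generalizing
    to the test set requires learning the abstract relation itself.
    """
    rng = random.Random(seed)

    def sample(symbols):
        examples = []
        for _ in range(n_examples):
            if rng.random() < 0.5:           # positive pair: same symbol twice
                s = rng.choice(symbols)
                examples.append(((s, s), 1))
            else:                            # negative pair: two distinct symbols
                a, b = rng.sample(symbols, 2)
                examples.append(((a, b), 0))
        return examples

    return sample(train_symbols), sample(test_symbols)

train_set, ood_test_set = make_same_different_split(
    train_symbols=[f"sym{i}" for i in range(100)],      # seen during training
    test_symbols=[f"sym{i}" for i in range(100, 120)],  # never seen in training
    n_examples=1000,
)
```

Under this split, a fully-connected network tied to fixed input symbols has no basis for labeling the unseen test symbols, which is the kind of failure mode the abstract contrasts with transformers.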
