A team of AI researchers at Google DeepMind, working with a colleague from the University of Southern California, has developed a method that allows large language models (LLMs) to find and use task-intrinsic reasoning structures as a means of improving the results they return.