Self-reflective Uncertainties: Do LLMs Know Their Internal Answer Distribution?

This paper was accepted at the Workshop on Reliable and Responsible Foundation Models (RRFMs) at ICML 2025.
Uncertainty quantification plays a pivotal role when bringing large language models (LLMs) to end-users. Its primary goal is that an LLM should indicate when it is unsure about an answer it gives. While this has previously been conveyed through numerical certainty scores, we propose to use the rich output space of LLMs, the space of all possible strings, to give a string that describes the uncertainty. In particular, we seek a string that describes the distribution of LLM answers…
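To make the idea concrete, here is a minimal sketch, not the paper's method, of how one might estimate an LLM's answer distribution by repeated sampling and then render it as a natural-language uncertainty string. The sample_answer stub and the wording of the generated description are hypothetical placeholders for illustration only.

```python
import random
from collections import Counter


def sample_answer(question: str) -> str:
    # Hypothetical stand-in for sampling an LLM at temperature > 0.
    # A real implementation would call a model API here.
    candidate_pool = ["Paris", "Paris", "Paris", "Lyon"]
    return random.choice(candidate_pool)


def empirical_answer_distribution(question: str, n_samples: int = 100) -> dict[str, float]:
    # Estimate the model's internal answer distribution by repeated sampling.
    counts = Counter(sample_answer(question) for _ in range(n_samples))
    return {answer: count / n_samples for answer, count in counts.items()}


def uncertainty_string(dist: dict[str, float]) -> str:
    # Render the sampled distribution as a string describing the uncertainty.
    parts = [
        f"'{answer}' about {prob:.0%} of the time"
        for answer, prob in sorted(dist.items(), key=lambda kv: -kv[1])
    ]
    return "I would answer " + ", ".join(parts) + "."


if __name__ == "__main__":
    question = "What is the capital of France?"
    dist = empirical_answer_distribution(question)
    print(uncertainty_string(dist))
```

Such a string can then be compared against what the model says about its own uncertainty when asked directly, which is the kind of self-reflection the paper's title refers to.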