Self-reflective Uncertainties: Do LLMs Know Their Internal Answer Distribution?
This paper was accepted at the Workshop on Reliable and Responsible Foundation Models (RRFMs) at ICML 2025. Uncertainty quantification plays a pivotal role when bringing large language models (LLMs) to end-users. Its primary goal is for an LLM to indicate when it is unsure about an answer it gives. While this has been revealed …