As LLMs grow bigger, they’re more likely to give wrong answers than admit ignorance
A team of AI researchers at the Universitat Politècnica de València in Spain has found that as popular large language models (LLMs) grow larger and more sophisticated, they become less likely to admit to a user that they do not know an answer.
A new EPFL study has demonstrated the persuasive power of large language models, finding that participants who debated GPT-4 while it had access to their personal information were far more likely to change their opinions than those who debated humans.
The common approach to communicating a large language model's (LLM) uncertainty is to append a percentage or a hedging word to its response. But is this all we can do? Instead of generating a single answer and then hedging it, an LLM that is fully transparent to the user…