AI decision aids aren’t neutral: Why some users become easier to mislead
Guidance based on artificial intelligence (AI) may be uniquely placed to foster biases in humans and lead to less effective decision making, say researchers who found that people with a positive view of AI may be at greater risk of being misled by AI tools. The study, titled "Examining Human Reliance on Artificial Intelligence in Decision Making," is published in Scientific Reports.