Human-like artificial intelligence may face greater blame for moral violations
In a new study, participants tended to assign greater blame to artificial intelligences (AIs) involved in real-world moral transgressions when they perceived the AIs as having more human-like minds. Minjoo Joo of Sookmyung Women’s University in Seoul, Korea, presents these findings in the open-access journal PLOS ONE on December 18, 2024.