Generative AI models are encoding biases and negative stereotypes in their users, say researchers
In the space of a few months, generative AI models such as ChatGPT, Google's Bard and Midjourney have been adopted by more and more people in a variety of professional and personal ways. But a growing body of research is underlining that they are encoding biases and negative stereotypes in their users, as well as mass-generating and spreading seemingly accurate but nonsensical information. Worryingly, marginalized groups are disproportionately affected by the fabrication of this nonsensical information.
Generative AI models like ChatGPT are trained using vast amounts of data obtained from websites, forums, social media and other online sources; as a result, their responses can contain harmful or discriminatory biases.