Artificial intelligence (AI) should be designed to include and balance human oversight, agency, and accountability over decisions across the AI lifecycle. IBM’s first Principle for Trust and Transparency states that the purpose of AI is to augment human intelligence. Augmented human intelligence means that AI enhances human intelligence rather than operating independently of it or replacing it. AI systems, in other words, are not to be treated as human beings, but as support mechanisms that enhance human intelligence and potential.
AI that augments human intelligence maintains human responsibility for decisions, even when those decisions are supported by an AI system. Humans therefore need to be upskilled, not deskilled, by interacting with AI. Supporting inclusive and equitable access to AI technology, comprehensive employee training, and potential reskilling further reinforces the tenets of IBM’s Pillars of Trustworthy AI, so that participation in the AI-driven economy is underpinned by fairness, transparency, explainability, robustness, and privacy.
To put the principle of augmenting human intelligence into practice, we recommend the following best practices:
For more information on standards and regulatory perspectives on human oversight, related research, AI Decision Coordination, sample use cases, and key performance indicators (KPIs), see our Augmenting Human Intelligence POV below.
Explore our Augmenting Human Intelligence POV