Exploring the risks and alternatives of ChatGPT: Paving a path to trustworthy AI

You are making a smoothie for your friends to enjoy. You have already blended assorted fruit and yogurt when your friend Ruchir arrives with a ripe apple and hands it to you to complete your refreshing masterpiece. With the blending done, you can almost smell the hint of apple as the drink is being poured. Before your first sip, Ruchir says, “I’ve changed my mind. I need to leave, and I’d like my apple back.” You reply, “Ah, excuse me, but that’s just not possible.” We’ll come back to this story in a minute and explain how it relates to ChatGPT and trustworthy AI.

As the world of artificial intelligence (AI) evolves, new tools like OpenAI’s ChatGPT have gained attention for their conversational capabilities. As impressive as these tools are, I also understand how critical it is to evaluate the inherent risks before adopting them directly within our organizations. In this discussion, I explore the risks and challenges associated with using ChatGPT in an enterprise context, which necessitate a careful approach to its implementation. I will also emphasize the significance of adopting IBM watsonx for ensuring trustworthy AI solutions. And when in doubt, I recommend you apply the same common sense you have always used when adopting new internet services.

Evolution of AI tools

ChatGPT harnesses the immense power of GPT-3 and GPT-4, belonging to a new class of “gargantuan” and widely popular large language models used in various AI applications. With ChatGPT, users can ask questions, generate text, draft emails, discuss code in different programming languages, translate natural language to code and more. It stands out as a high-quality conversational chatbot that aims to provide coherent and context-aware responses.

ChatGPT is an excellent tool for exploring creative writing, generating ideas and interacting with AI. It is free for everyone to use, with a more advanced version available to ChatGPT Plus subscribers. The chatbot’s ability to remember previous conversations adds to its interactive and engaging experience.  

While ChatGPT has gained significant attention and popularity, it faces competition from other AI-powered chatbots and natural language processing (NLP) systems. Google, for example, has developed Bard, its AI chatbot, which is powered by its own language engine called PaLM 2. Similarly, Meta recently released its impressive LLaMA2 model. As the field of AI chatbots continues to evolve, there will certainly be increased competition and the emergence of new players. It is essential to stay updated on the advancements in this space to explore the best solutions for enterprise needs.

Why not use ChatGPT directly in the enterprise?

Direct usage of ChatGPT in an enterprise presents risks and challenges. These include security and data leakage, confidentiality and liability concerns, intellectual property complexities, compliance with open-source licenses, limitations on AI development, and uncertain privacy and compliance with international laws. Here, I explore these risks and share examples that illustrate how these risks could manifest in your everyday enterprise activities.

Later, I’ll examine alternative solutions that aim to mitigate the risks of using ChatGPT directly, including IBM watsonx, which I do recommend for enterprise usage because it addresses data ownership and privacy concerns through rigorous curation and governance. I will conclude this conversation by bringing you back to the smoothie story, I promise, but when I mention “your data” below, feel free to substitute the phrase with “your apple.”

Before exploring alternative solutions, it is crucial for companies to be mindful of the potential risks and challenges that come with using ChatGPT directly. As a commonsense reminder, the emergence and evolution of new internet services (e.g., Google search, social media platforms) has repeatedly underscored the importance of data privacy and ownership in the enterprise. Bearing this in mind, here are key factors to take into consideration:

Security and data leakage

If sensitive third-party or internal company information is entered into ChatGPT, it becomes part of the chatbot’s data model and may be shared with others who ask relevant questions. This could lead to data leakage and violate an organization’s security policies.

Example: Plans for a new product that your team is helping a customer launch, including confidential specifications and marketing strategies, should not be shared with ChatGPT to avoid the risk of data leakage and potential security breaches.

Confidentiality and privacy

Similar to the point above, sharing confidential customer or partner information may violate contractual agreements and legal requirements to protect such information. If ChatGPT’s security is compromised, confidential content may be leaked, potentially impacting the organization’s reputation and exposing it to liability.

Example: Suppose a healthcare organization uses ChatGPT to assist in responding to patient inquiries. If confidential patient information, such as medical records or personal health details, is shared with ChatGPT, it could potentially violate legal obligations and patient privacy rights protected by laws like HIPAA (Health Insurance Portability and Accountability Act) in the United States.

Intellectual property concerns

Ownership of the code or text generated by ChatGPT can be complex. Terms of service state that the output belongs to the provider of the input, but issues may arise when the output includes legally protected data sourced from other inputs. Copyright concerns may also arise if ChatGPT is used to generate written material based on copyrighted property.

Example: If a company uses ChatGPT to generate written material for marketing purposes and the output includes copyrighted content from external sources without proper attribution or permission, it could infringe upon the intellectual property rights of the original content creators. This can result in legal consequences and reputational damage for the company.

Compliance with open source licenses

If ChatGPT utilizes open-source libraries and incorporates that code into products, it could potentially violate Open Source Software (OSS) licenses (e.g., GPL), leading to legal complications for the organization.

Example: If a company utilizes ChatGPT to generate code for a software product and the origin of the training data used to train GPT is unclear, there is a risk of potentially violating the terms of open-source licenses associated with that code. This can lead to legal complications, including claims of license infringement and potential legal action from the open-source community.

Limitations on AI development

The terms of service for ChatGPT specify that it cannot be used in the development of other AI systems. Using ChatGPT in this way may hinder future AI development plans if the company operates in that space.

Example: A company specializing in voice recognition technology plans to enhance its existing system by integrating ChatGPT’s natural language processing capabilities. However, the terms of service for ChatGPT explicitly state that it cannot be used in the development of other AI systems, so proceeding would put the company in breach of those terms and jeopardize its development plans.

Enhanced trustworthiness with IBM watsonx

Relating back to our smoothie story, the public ChatGPT service can use your prompt data to enhance its neural network, much as the apple adds flavor to the smoothie. Once your data enters ChatGPT, like the blended apple, you have no control over, or knowledge of, how it is used. Hence, you must be certain you have full rights to include your apple, and that it doesn’t contain sensitive data, so to speak.

To address these concerns, IBM watsonx offers curated and transparent data and models, providing greater control and confidence in the creation and usage of your smoothie. Simply put, if Ruchir asked for his apple back, watsonx could honor his request. There you go: analogy and story complete.

IBM watsonx introduces three key features — watsonx.data, watsonx.ai, and watsonx.governance — that collaborate to establish trustworthy AI in a way that is not yet present in OpenAI models. These features curate and label data and AI models, ensuring transparency in origin and ownership details. They also govern the models and data, addressing ongoing drift and bias concerns. This rigorous approach effectively mitigates data ownership and privacy concerns discussed in this article.

IBM has partnered with Hugging Face, an open-source company, to create an ecosystem of models. Both companies are leveraging the watsonx features to curate and endorse models based on their functionality and trustworthiness.

Going forward with AI

The direct usage of AI chatbots like ChatGPT within an enterprise presents risks related to security, data leakage, confidentiality, liability, intellectual property, compliance, limitations on AI development and privacy. These risks can have detrimental consequences for organizations, including reputational damage and costly legal complications.

To mitigate these risks and establish trustworthy AI, IBM watsonx emerges as a recommended solution. It offers curated and labeled data and AI models, ensuring transparency in ownership and origin. It addresses concerns related to bias and drift, providing an extra layer of trust. IBM watsonx strikes a balance between innovation and responsible AI usage. Moreover, the collaboration between IBM and Hugging Face strengthens the ecosystem of models.

While watsonx offers enhanced trust and rigor, few models can currently match the broad range of general-purpose usage seen with ChatGPT and the GPT family of models. The field of AI models continues to evolve, and ongoing improvements can be expected. To ensure optimal results, it is crucial to understand how models are rated and trained. This knowledge enables informed decisions and allows organizations to select models that best align with their needs and quality standards.

By adopting watsonx, organizations can embrace the power of AI while maintaining control over their data and ensuring compliance with ethical and legal standards. They can safeguard their data, protect their intellectual property, and foster trust with stakeholders, all while benefiting from curated models and enhanced transparency. As enterprises navigate the realm of AI, it is crucial to proceed cautiously, exploring solutions and prioritizing trustworthy AI.

Follow the Art of A.I. for Business podcast

From time to time, IBM invites industry thought leaders to share their opinions and insights on current technology trends. The opinions in this blog post are their own, and do not necessarily reflect the views or strategies of IBM.

The post Exploring the risks and alternatives of ChatGPT: Paving a path to trustworthy AI appeared first on IBM Blog.