Artificial intelligence is disrupting many different areas of business. The technology's potential is particularly apparent in customer service, talent, and application modernization. According to IBM's Institute for Business Value (IBV), AI can contain contact center cases, enhancing customer experience by 70%. Additionally, AI can increase productivity in HR by 40% and in application modernization by 30%. One example is reducing the labor burden in IT operations by automating ticket assistance. While these numbers indicate transformation opportunities for enterprises, scaling and operationalizing AI has historically been challenging for organizations.
Request a demo to see how watsonx can put AI to work
There’s no AI, without IA
AI is only as good as the data that informs it, and the need for the right data foundation has never been greater. According to IDC, stored data is expected to grow up to 250% over the next 5 years.
With data stored across clouds and on-premises environments, it becomes difficult to access while managing governance and controlling costs. Further complicating matters, the uses of data have become more varied, and companies are faced with managing complex or poor-quality data.
A study conducted by Precisely found that data scientists in enterprises spend 80% of their time cleaning, integrating and preparing data, dealing with many formats including documents, images and videos. These findings underscore the importance of establishing a trusted, integrated data platform for AI.
Trust and AI
With access to the right data, it is easier to democratize AI for all users by using the power of foundation models to support a wide range of tasks. However, it's important to factor in the opportunities and risks of foundation models—in particular, how much the models can be trusted when deploying AI at scale.
A lack of trust is a leading factor preventing stakeholders from implementing AI. In fact, IBV found that 67% of executives are concerned about the potential liabilities of AI. Much existing responsible AI tooling lacks technical depth and is restricted to specific environments, meaning customers are unable to use the tools to govern models on other platforms. This is alarming, considering that generative models can produce output containing toxic language—including hate, abuse, and profanity (HAP)—or leak personally identifiable information (PII). Companies are increasingly receiving negative press for their AI usage, damaging their reputations. Data quality strongly impacts the quality and usefulness of content produced by an AI model, underscoring the significance of addressing these data challenges.
Increasing user productivity with knowledge management
An emerging generative AI application is knowledge management. With the power of AI, enterprises can precisely collect, create, access, and share relevant data for organizational insights. Knowledge management applications are often implemented as a centralized system to support business domains and tasks—including talent, customer service, and application modernization.
HR, talent, and AI
HR departments can put AI to work on tasks like content generation, retrieval-augmented generation (RAG), and classification. Content generation can be used to quickly create the description for an open role. Retrieval-augmented generation can help identify the skills needed for a role based on internal HR documents. Classification can help determine whether an applicant is a good fit for the enterprise given their application. Together, these tasks shorten the time between when a person applies and when they receive a decision on their application.
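As a rough illustration, the sketch below shows how the applicant-classification step could be framed as a single prompt to a foundation model. The generate_text function is a hypothetical placeholder for whichever hosted model API an enterprise uses, not a specific watsonx call, and the labels are illustrative.

```python
def generate_text(prompt: str) -> str:
    """Hypothetical stand-in for a hosted foundation model call; replace with your provider's SDK."""
    raise NotImplementedError


def classify_applicant(job_description: str, application: str) -> str:
    # Frame the fit decision as a constrained classification prompt.
    prompt = (
        "You are an HR assistant. Given the job description and the candidate's "
        "application, answer with exactly one label: GOOD_FIT or NOT_A_FIT.\n\n"
        f"Job description:\n{job_description}\n\n"
        f"Application:\n{application}\n\n"
        "Label:"
    )
    return generate_text(prompt).strip()
```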
Customer service and AI
Customer service divisions can take advantage of AI by using retrieval-augmented generation, summarization, and classification. For example, enterprises can incorporate a customer service chatbot on their website that uses generative AI to be more conversational and context specific. Retrieval-augmented generation can search internal documents to answer the customer's inquiry and generate a tailored response. Summarization can help employees by providing a brief of the customer's problem and previous interactions with the company. Text classification can be used to gauge the customer's sentiment. These tasks reduce manual labor while improving customer care and retention.
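The sketch below outlines the retrieval-augmented generation flow for a customer inquiry: rank internal documents by similarity to the question, then ask the model to answer only from that context. The embed and generate_text functions are hypothetical placeholders for an embedding model and a text-generation model.

```python
import numpy as np


def embed(text: str) -> np.ndarray:
    """Hypothetical embedding call; replace with your embedding model."""
    raise NotImplementedError


def generate_text(prompt: str) -> str:
    """Hypothetical text-generation call; replace with your LLM provider."""
    raise NotImplementedError


def answer_inquiry(question: str, documents: list[str], k: int = 3) -> str:
    # Rank internal documents by cosine similarity to the customer's question.
    q = embed(question)
    scored = []
    for doc in documents:
        d = embed(doc)
        score = float(q @ d / (np.linalg.norm(q) * np.linalg.norm(d)))
        scored.append((score, doc))
    top_docs = [doc for _, doc in sorted(scored, reverse=True)[:k]]

    # Ground the model's answer in the retrieved context.
    prompt = (
        "Answer the customer's question using only the context below.\n\n"
        "Context:\n" + "\n---\n".join(top_docs) +
        f"\n\nQuestion: {question}\nAnswer:"
    )
    return generate_text(prompt)
```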
Application modernization and AI
App modernization can also benefit from summarization and content generation. With a summary of business objectives, developers can spend less time learning the business playbook and more time coding. IT workers can also generate a summary of a support ticket to quickly address and prioritize the issues it raises. Another way developers can use generative AI is by prompting large language models (LLMs) in natural language and asking the model to generate code. This can help the developer translate code between languages, fix bugs, and reduce time spent coding, allowing for more creative ideation.
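The following sketch shows how the ticket-summarization and code-translation tasks could be expressed as prompts, again using the same hypothetical generate_text placeholder rather than any specific product API.

```python
def generate_text(prompt: str) -> str:
    """Hypothetical text-generation call; replace with your LLM provider."""
    raise NotImplementedError


def summarize_ticket(ticket_text: str) -> str:
    # Condense a support ticket so it can be triaged and prioritized quickly.
    prompt = (
        "Summarize this support ticket in two sentences and suggest a priority "
        "(low, medium, high).\n\n" + ticket_text
    )
    return generate_text(prompt)


def translate_code(source: str, from_lang: str, to_lang: str) -> str:
    # Ask the model to translate legacy code into a target language.
    prompt = (
        f"Translate the following {from_lang} code to idiomatic {to_lang}. "
        "Return only the translated code.\n\n" + source
    )
    return generate_text(prompt)
```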
Powering a knowledge management system with a data lakehouse
Organizations need a data lakehouse to address the data challenges that come with deploying an AI-powered knowledge management system. A data lakehouse is a fit-for-purpose data store that combines the flexibility of a data lake with the performance of a data warehouse to help scale AI.
To prepare data for AI, data engineers need the ability to access any type of data across a vast number of sources and hybrid cloud environments from a single point of entry. A data lakehouse with multiple query engines and storage options allows engineers to share data in open formats. Additionally, engineers can cleanse, transform and standardize data for AI/ML modeling without duplicating it or building additional pipelines. Moreover, enterprises should consider lakehouse solutions that incorporate generative AI to help data engineers and non-technical users easily discover, augment and enrich data with natural language. Data lakehouses improve the efficiency of deploying AI and of generating data pipelines.
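As a minimal sketch of that preparation step, the snippet below reads a table stored in an open format (Parquet) directly from object storage and standardizes it in place. The bucket path and column names are illustrative assumptions, and reading from S3 with pandas assumes an s3-compatible filesystem library such as s3fs is installed.

```python
import pandas as pd

# Read data in an open format directly from the lakehouse's object storage.
# The path is illustrative; reading "s3://..." with pandas requires s3fs.
tickets = pd.read_parquet("s3://lakehouse/support_tickets/")

# Cleanse and standardize without duplicating the data into a separate pipeline.
tickets = tickets.drop_duplicates(subset="ticket_id")
tickets["description"] = tickets["description"].str.strip().str.lower()
tickets = tickets.dropna(subset=["description", "resolved"])

# The standardized frame can now feed AI/ML training or a retrieval index.
print(tickets.head())
```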
AI-powered knowledge management systems hold sensitive data, including HR email automations, marketing video translations and call center transcript analytics. With this kind of sensitive information, securing access to the data becomes increasingly important. Customers need a data lakehouse that offers built-in centralized governance and local automated policy enforcement, supported by data cataloging, access controls, security and transparency in data lineage.
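To make the idea of local automated policy enforcement concrete, here is a simplified sketch in which columns tagged as sensitive in a catalog are masked unless the caller's role permits them. The roles, tags and column names are illustrative assumptions, not features of any particular product.

```python
import pandas as pd

SENSITIVE_TAGS = {"email", "phone", "ssn"}
ROLE_PERMISSIONS = {"hr_admin": SENSITIVE_TAGS, "analyst": set()}


def enforce_policy(df: pd.DataFrame, column_tags: dict[str, str], role: str) -> pd.DataFrame:
    """Mask sensitive columns based on catalog tags and the caller's role."""
    allowed = ROLE_PERMISSIONS.get(role, set())
    masked = df.copy()
    for column, tag in column_tags.items():
        if tag in SENSITIVE_TAGS and tag not in allowed and column in masked:
            masked[column] = "***REDACTED***"
    return masked
```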
With the data foundation set by a data lakehouse solution, data scientists can confidently build, train, tune and deploy AI models on governed data, ensuring trust in the results.
Ensure responsible, transparent, and explainable knowledge management systems
As previously mentioned, chatbots are a popular form of generative AI-powered knowledge management system used for customer experience. This application can produce value for an enterprise, but it also poses risk.
For instance, a chatbot for a healthcare company can reduce nurse workloads and improve customer service by answering questions about treatments using known details from previous interactions. However, if data quality is poor or if bias was introduced into the model during fine-tuning or prompt tuning, the model is likely to be untrustworthy. As a result, the chatbot may offer a response to a patient that includes inappropriate language or leaks another patient's PII.
To prevent this situation, organizations need proactive detection and mitigation of bias and drift when deploying AI models. An automatic content-filtering capability that detects HAP and PII leakage would reduce the burden on model validators, who would otherwise have to manually check model output for toxic content.
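The sketch below illustrates the shape of such a filter: a regex pass for common PII patterns plus a simple keyword check standing in for a trained HAP classifier. The patterns and term list are illustrative; a production system would use trained detectors rather than hand-written rules.

```python
import re

PII_PATTERNS = [
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),        # email addresses
    re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),    # US-style phone numbers
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                # US SSN format
]
HAP_TERMS = {"idiot", "stupid"}  # placeholder list; use a trained HAP classifier in practice


def is_safe(response: str) -> bool:
    """Return False if the response appears to contain PII or HAP terms."""
    if any(pattern.search(response) for pattern in PII_PATTERNS):
        return False
    words = set(re.findall(r"[a-z']+", response.lower()))
    return not (words & HAP_TERMS)


def guarded_reply(response: str) -> str:
    # Replace unsafe model output before it reaches the customer.
    return response if is_safe(response) else "I'm sorry, I can't share that information."
```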
Turn possibility into reality with watsonx
When looking to deploy generative AI models, businesses should join forces with a trusted partner that has created or sourced quality models from quality data—one that allows customization with enterprise data and goals.
IBM watsonx is an integrated AI and data platform with all the capabilities to automate HR processes, enhance customer experiences and modernize IT workflows to reduce workload. Leverage tools within the platform to store, govern and prepare all your data across hybrid cloud environments. Build and deploy traditional machine learning (ML) and generative AI solutions, with capabilities to manage the entire AI lifecycle.
Instead of offering disparate AI solutions, watsonx takes an open approach, based on foundation models that are multi-model and multi-cloud and targeted at a range of business use cases. With a variety of models to choose from that can be curated using proprietary data and company guidelines to achieve responsible AI, watsonx is trusted and empowering for all AI value creators, offering full control of data and models to create business value.
Book a trial to see the value for your enterprise