Reality-Check The AI Hype With These Three R's

Forbes Technology Council
Updated May 10, 2024, 03:10pm EDT

CEO and cofounder of Glean.

2023 underscored AI's potential to help people accomplish incredible things, with Statista noting that the global generative AI market size nearly doubled between 2022 and 2023.

As organizations work to implement AI in 2024, CIOs need to weigh three key factors to ensure effective, safe and sustainable AI usage: reasoning, retrieval and reliability. Without them, organizations risk AI hallucinations, inaccurate or out-of-date responses, and sensitive data landing in the wrong hands.

Reasoning

Reasoning is a large language model's (LLM's) capability to deliver a correct and coherent response: it works to understand the nuances of a query and avoid common AI pitfalls. As it stands today, however, the reasoning of some of the leading LLMs on the market is only about as advanced as a high schooler's.

As humans, we can apply past experiences, learnings and interactions from our daily lives to inform future decisions. Basic generative AI platforms, however, don't come equipped with that unique knowledge and background. This inability to learn from experience results in false reasoning, known as "hallucinating," which produces false or misleading answers. LLMs with strong reasoning empower users to derive impactful, relevant results that accelerate workflows and cut the hours otherwise lost to searching.

Retrieval

If an LLM's reasoning is where AI falls short, a robust retrieval framework, also known as retrieval-augmented generation (RAG), is critical to the solution. RAG ensures LLMs draw on traceable, referenceable information to deliver trustworthy, tailored answers to users.

The RAG process starts with the retrieval phase: a knowledge retrieval solution searches for and surfaces the most relevant, up-to-date information for a user's question. The LLM then uses those sources of knowledge to construct its response.
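To make the two phases concrete, here is a minimal sketch of the retrieve-then-generate flow in Python. The document store, keyword-overlap scoring and prompt format are illustrative assumptions standing in for a production retriever and embedding model; they are not any particular vendor's implementation.

```python
# Minimal retrieve-then-generate sketch. Keyword overlap stands in for
# semantic (embedding) search; real tokenization is omitted for brevity.
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str
    updated: str  # ISO date, e.g. "2024-01-02"; sorts correctly as a string

def retrieve(query: str, corpus: list[Document], k: int = 3) -> list[Document]:
    """Score documents by word overlap with the query and return the top-k,
    breaking ties in favor of fresher documents."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(d.text.lower().split())), d.updated, d) for d in corpus]
    scored.sort(key=lambda t: (t[0], t[1]), reverse=True)
    return [d for score, _, d in scored[:k] if score > 0]

def build_prompt(query: str, sources: list[Document]) -> str:
    """Ground the model's answer in the retrieved, citable sources."""
    context = "\n".join(f"[{d.doc_id}] {d.text}" for d in sources)
    return (
        "Answer the question using only the sources below, and cite them by id.\n\n"
        f"{context}\n\nQuestion: {query}"
    )

corpus = [
    Document("benefits-2024", "The 2024 benefits package adds a wellness stipend.", "2024-01-02"),
    Document("benefits-2022", "The 2022 benefits package overview.", "2022-01-10"),
]
question = "What is in the 2024 benefits package?"
prompt = build_prompt(question, retrieve(question, corpus))
print(prompt)  # this grounded prompt is what would be sent to the LLM
```

In practice the scoring step would use semantic embeddings and freshness signals, but the shape of the pipeline stays the same: retrieve first, then ground the model's answer in what was retrieved.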

The importance of RAG becomes especially clear when thinking about AI in the workplace. Say I'm a new employee trying to figure out my company's benefits package for the year ahead. I need an AI assistant that references the latest package rather than packages from years prior. I may also want to know the best HR representative at my company to contact with any follow-up questions.

In this scenario, RAG will sift through my company's different knowledge sources, such as Slack or Google Drive, to pull the most relevant information that I have permission to access. The LLM can then use this information to deliver a coherent, well-informed response. Solutions without a strong retrieval system may miss these critical details, giving employees outdated or incorrect responses that cause delays rather than accelerating the process.

Reliability

With reasoning and retrieval squared away, we can look at what most companies have their eyes on right now: security. If we want AI to live up to the potential of becoming a user-specific assistant, LLMs require a comprehensive database of personal information to reference. How can we rely on AI to ensure that proprietary data remains secure?

Ensuring a user's data security requires that any information the model receives strictly follows each document's unique permission rules. Without this guarantee, users' trust in how the AI platform secures sensitive data diminishes. That insecurity around privacy can erode confidence in the platform, hinder usability and delay the adoption of new technologies.

An understanding of permissions allows users to confidently rely on the AI platform to access and use data—with the peace of mind of knowing their information isn't accessible to other users. For instance, Glean's AI assistant is fully permissions-aware and personalized, only sourcing information to which a user has explicit access. Our permission awareness maintains the security and confidentiality of sensitive data and enhances the relevance and personalization of the AI's output.
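As a rough illustration of that principle, the sketch below filters retrieval candidates against each document's access-control list before anything reaches the model. The document structure and user identifiers are hypothetical; a real system would mirror the permissions of each underlying source system (Slack, Google Drive and so on) rather than storing its own.

```python
# Permission-aware retrieval sketch: enforce each document's ACL at the
# retrieval layer, before the LLM ever sees the content.
from dataclasses import dataclass

@dataclass
class PermissionedDocument:
    doc_id: str
    text: str
    allowed_users: set[str]  # mirrors the ACL of the source system

def permission_filter(user: str, candidates: list[PermissionedDocument]) -> list[PermissionedDocument]:
    """Drop any candidate the requesting user cannot already open in the
    source system."""
    return [d for d in candidates if user in d.allowed_users]

candidates = [
    PermissionedDocument("benefits-2024", "2024 benefits overview.", {"alice", "bob"}),
    PermissionedDocument("exec-comp", "Executive compensation plan.", {"carol"}),
]
visible = permission_filter("alice", candidates)
print([d.doc_id for d in visible])  # ['benefits-2024']; exec-comp never reaches the model
```

Enforcing permissions at the retrieval layer means a document a user can't open in the source system never shows up in their AI-generated answers either.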

Getting Started

As generative AI's potential continues to unfold and people and organizations look for ways to apply AI effectively in their day-to-day lives, reasoning, retrieval and reliability will become paramount to mainstream adoption. CIOs must prepare themselves and their organizations to implement these modern generative AI solutions and strategies.

To begin, CIOs should evaluate the integrity of their company's data (both structured and unstructured) across all of the tools their organization uses. As noted above, thorough documentation and data are the foundation of a successful generative AI solution; without a comprehensive, reliable data set backing the knowledge retrieval solution, there's a significant risk of inaccurate or "hallucinated" responses.

After assessing their organization's database, CIOs should draft companywide guidelines advising on responsible AI use. There's no one-size-fits-all set of guidelines that will work for every organization, so it's essential for IT leaders to work with teams across their company to understand how they might use these tools and draft their guidelines accordingly.

Most importantly, CIOs should recognize that their organization's AI processes and use cases will likely evolve as AI advances. CIOs should consistently experiment and iterate to explore innovative AI solutions and drive progress.

By placing reasoning, retrieval and reliability at the forefront, CIOs can offer a product their users can trust, and users can harness the power of AI to drive productivity.



