GenAI, RAG, and the Semantic Layer

Generative AI (GenAI) has made a big splash with the emergence of ChatGPT and the many impressive generative text, image and audio tools now available. You might ask: how can my business make use of it all? A great example is an enterprise chatbot, where any user can ask questions of their business and get the appropriate response. However, pre-trained Large Language Models (LLMs) alone are not enough; they need your company's data. That is where RAG steps in.

What is RAG?

Why is RAG important for my GenAI Agent?

Delivering a RAG system with Zetaris Semantic Layer

What is RAG?

RAG stands for Retrieval Augmented Generation, an approach that enhances a large language model's ability to answer questions or perform tasks by accessing and using external information.

The "Retrieval" part involves retrieving relevant information from external sources like websites, databases, datalakes, knowledge bases etc. This allows the model to go beyond its pre-trained knowledge.

The "Augmented" part takes the retrieved information and conditions or primes the language model with this extra context before generating an output.

The "Generation" part is where the language model uses its natural language abilities along with the augmented context to produce a final output like an answer, analysis, or generated text.

Why is RAG important for my GenAI Agent?

RAG is useful because, while large language models have a lot of general knowledge from pre-training, they can't know everything about your business. Giving them a mechanism to quickly retrieve and use relevant external data lets them produce more up-to-date, complete and accurate outputs, especially for queries that need very specific or recent information not contained in the training data.

In a business context, your enterprise's varied landscape of data sources and systems is what your RAG should be built from. With it, you can deploy natural language applications such as chatbots, virtual assistants and natural language analytics agents.

When done properly, users across industries will see the benefit, from health, automotive and telecommunications to finance, education, manufacturing and retail. Being able to query your enterprise's full set of data allows a nurse or clinician to ask questions like,

"How many beds are available in Ward F right now, and where are they?"

Or an analyst or manager could quickly ask,

"What is the average productivity percentage of my team for the week, and how does that compare to our target? Are there any outliers?

Or a customer could ask about their account,

"I am not happy with the amount I am saving, what should I be changing in my spending habits to fix this?"

and get a response that looks at past interactions and the full transaction history to generate an articulate answer based on that customer's secured, personal information.

The opportunities for using Generative AI and RAG in your organisation are just about endless.

Delivering a RAG system with Zetaris Semantic Layer

Delivering a RAG system to an enterprise is not easily done without the right tools. The best RAG systems consider all data sources related to the question being asked, and this is where Zetaris steps in.

Zetaris uses a decentralised approach to securely access, prepare and deliver your data at scale. Fundamentally, Zetaris can connect to a range of data sources of varying types, so that your organisation has a single access point, or source of truth, for its data via domain-specific Semantic Layers.

[Image: data sources connected through a Zetaris Semantic Layer]

Take the domain "customer" for example: a complete view of a customer might span your company's warehouse, its cloud or on-prem lakehouse, a CRM platform, and a legacy system. Zetaris can connect to each of those platforms and build a consistent reference model, the Semantic Layer, that your organisation can refer to. See the Customer_360 Semantic Layer below as an example.
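
As a rough illustration of what such a consolidated reference model might look like, the sketch below treats the Semantic Layer as a single virtual view stitched together from those platforms. The source names, tables and SQL are invented for this example and are not Zetaris syntax or a real schema.

```python
# Illustrative only: a hypothetical federated "Customer_360" view expressed
# as ANSI-style SQL held in a Python string. Source and table names are made up.
CUSTOMER_360_VIEW = """
CREATE VIEW customer_360 AS
SELECT w.customer_id,
       w.lifetime_value,         -- from the data warehouse
       l.web_events_last_30d,    -- from the cloud / on-prem lakehouse
       c.open_support_tickets,   -- from the CRM platform
       m.billing_status          -- from the legacy system
FROM   warehouse.customers     AS w
JOIN   lakehouse.web_activity  AS l ON l.customer_id = w.customer_id
JOIN   crm.support_summary     AS c ON c.customer_id = w.customer_id
JOIN   legacy.billing_snapshot AS m ON m.customer_id = w.customer_id
"""
```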

Then, in the context of RAG, this Semantic Layer is what the retrieval step is built from, giving your LLM the most contextual reference point it could need.
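
To sketch how that grounding might work in practice, the snippet below is illustrative only: run_semantic_layer_query and call_llm are hypothetical stand-ins rather than the actual Zetaris or GenZ interfaces, but they show the retrieve-augment-generate loop running against a unified customer_360 view.

```python
# Illustrative sketch only: run_semantic_layer_query() and call_llm() are
# hypothetical stand-ins, not the actual Zetaris or GenZ interfaces.

def run_semantic_layer_query(sql: str) -> list[dict]:
    """Retrieval: execute SQL against the Customer_360 Semantic Layer (stubbed here)."""
    return [{"customer_id": 1042, "open_support_tickets": 2, "billing_status": "overdue"}]

def call_llm(prompt: str) -> str:
    """Generation: stand-in for whichever public or private LLM is configured."""
    return "[answer grounded in the retrieved rows]"

def answer(question: str, customer_id: int) -> str:
    # Retrieval: pull only the rows the question needs from the unified view.
    rows = run_semantic_layer_query(
        f"SELECT * FROM customer_360 WHERE customer_id = {customer_id}"
    )
    # Augmentation: prime the model with the question plus the live, governed data.
    prompt = (f"Customer data: {rows}\n"
              f"Question: {question}\n"
              "Answer using only the data above.")
    # Generation: the LLM composes the final natural-language response.
    return call_llm(prompt)

print(answer("Why is my account flagged?", customer_id=1042))
```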

Zetaris comes with its own CoPilot, called GenZ, that you can plug into your deployment, with varying levels of customisation around which LLM to use and where it is hosted, whether public or private. The high-level design is shown below.

[Image: high-level design of the GenZ CoPilot deployment]
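
The exact configuration surface depends on your deployment; the snippet below is a hypothetical illustration, not the actual GenZ configuration format, of the kinds of choices involved, such as which Semantic Layer the agent is grounded in and whether the LLM sits behind a public API or inside your own network.

```python
# Hypothetical illustration only; this is not the actual GenZ configuration format.
genz_copilot_config = {
    "semantic_layer": "Customer_360",   # the domain the agent is grounded in
    "llm": {
        "location": "private",          # "public" (hosted API) or "private" (inside your network)
        "endpoint": "https://llm.internal.example.com/v1",  # placeholder URL
        "model": "your-chosen-model",   # placeholder model name
    },
    "feedback_training": True,          # capture user feedback for later fine-tuning
}
```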

Further, Zetaris will implement feedback training, along with a range of LLM configuration and fine-tuning capabilities, so that you have complete control over the answers being provided.

Reach out to us today for a demonstration, and to learn more.