See how to build a RAG-powered chatbot that answers HR policy questions with consistent, up-to-date results.

Join us on Tuesday, the 23rd of July, for an online workshop with Superlinked!

Large Language Models (LLMs) create much of their business value through Retrieval-Augmented Generation (RAG). While most public attention focuses on the quality of the text these models generate, I argue that developers can improve RAG performance far more efficiently by improving retrieval. Many RAG use cases draw on datasets that mix numeric and text data. In our HR chatbot use case, we have the policy text, the number of times each article was signalled as helpful, and timestamps marking the latest policy updates. In my workshop, I aim to show how to combine embeddings from these different data modalities into a single vector to build a high-performing RAG system.
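To make the idea concrete, here is a minimal sketch of that combination written in plain Python with numpy and sentence-transformers, not the Superlinked API: the policy text, its last-update timestamp, and its helpfulness count each contribute to a single vector, and query-time weights decide how strongly recency and helpfulness influence retrieval. The model name, scaling horizon, and weight values are illustrative assumptions.

```python
# Conceptual sketch: fold text, recency, and helpfulness signals into one vector.
# This illustrates the idea only; it is not the Superlinked API.
from datetime import datetime, timezone

import numpy as np
from sentence_transformers import SentenceTransformer  # assumed embedding model

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice


def embed_policy(text: str, updated_at: datetime, helpful_count: int,
                 max_helpful: int = 100, horizon_days: float = 365.0) -> np.ndarray:
    """Combine three modalities into a single vector.

    `updated_at` is the timezone-aware timestamp of the last policy update.
    """
    # 1) Text: semantic embedding, L2-normalised so modalities are comparable.
    text_vec = model.encode(text)
    text_vec = text_vec / np.linalg.norm(text_vec)

    # 2) Timestamp: map age to a recency score in [0, 1] (newer -> closer to 1).
    age_days = (datetime.now(timezone.utc) - updated_at).days
    recency = max(0.0, 1.0 - age_days / horizon_days)

    # 3) Numeric: scale the helpfulness count into [0, 1].
    helpfulness = min(helpful_count, max_helpful) / max_helpful

    # Concatenate into one vector; the scalar features get their own dimensions.
    return np.concatenate([text_vec, [recency], [helpfulness]])


def query_vector(question: str, w_text: float = 1.0,
                 w_recency: float = 0.5, w_helpful: float = 0.3) -> np.ndarray:
    """Build a query vector with per-modality weights applied at query time."""
    q_vec = model.encode(question)
    q_vec = w_text * q_vec / np.linalg.norm(q_vec)
    # Under dot-product similarity, weighting the scalar slots biases retrieval
    # toward recent, frequently-helpful policies without re-embedding anything.
    return np.concatenate([q_vec, [w_recency], [w_helpful]])
```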

In this workshop session, we will walk you through:

  • How to embed timestamp, numeric “helpfulness” rating and unstructured text data using Superlinked
  • Applying weights at query time to increase the quality and consistency of the results
  • Setting up a RAG system from your data to power a chatbot (a rough sketch of this step follows the list)
  • Connecting to a vector database and running a simple deployment of your application
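For a flavour of the retrieval-to-answer loop, here is a rough sketch that reuses `query_vector` from the sketch above. The in-memory list standing in for a vector database, the OpenAI client, and the model name are assumptions for illustration, not what the workshop prescribes.

```python
# Minimal retrieval-to-answer loop: an in-memory list stands in for the vector
# database, and the OpenAI chat API is used as an example LLM backend.
import numpy as np
from openai import OpenAI  # assumed LLM client; any chat model would do

client = OpenAI()


def retrieve(question: str, index: list[tuple[np.ndarray, str]], k: int = 3) -> list[str]:
    """Score every stored policy vector against the weighted query vector."""
    q = query_vector(question, w_text=1.0, w_recency=0.5, w_helpful=0.3)
    scored = sorted(index, key=lambda item: float(np.dot(item[0], q)), reverse=True)
    return [text for _, text in scored[:k]]


def answer(question: str, index: list[tuple[np.ndarray, str]]) -> str:
    """Ground the chatbot's answer in the retrieved policy passages."""
    context = "\n\n".join(retrieve(question, index))
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[
            {"role": "system",
             "content": "Answer HR policy questions using only the context provided."},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```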