NashTech Blog

Meet AgenticRAG: Building Conversational UI Endpoints

Hi! How may I help you?

Welcome back to our journey to AgenticRAG! The last article, Meet AgenticRAG: Asking Meaningful Questions to the AI Agent, provided a step-by-step guide to wiring the embeddings (indexed documents) stored in MongoDB Atlas into the RAG search query of AskAkkaAgenticAiAgent.

With the wiring in place, the next step is to build interactive UI endpoints for conversational interaction with the AI agent. This article walks through adding the endpoints that provide a client-friendly API in front of all the RAG components we built in the previous articles.

Integrate Session History View

LLMs are stateless by design: their response depends entirely on the prompt submitted. However, the web interfaces of services like ChatGPT, Google Gemini, and Microsoft Copilot keep a history of the user's conversation to maintain context across multiple prompts.

Similarly, an Akka agent has built-in session memory that stores both the user message (prompt) and the AI model's response (from ChatGPT, Google Gemini, etc.). These stored messages are then included as additional context in subsequent requests to the model, so it can refer back to earlier parts of the conversation.

To achieve this, we need a way to pull the conversation history for a given user. Hence, we add a ConversationHistoryView that lets us view the chat log (history).

Note: Here we retrieve the full history of all of a user's sessions. For efficiency, however, you may want to retrieve only the last 5 or 10 messages, since maintaining the full history of every session of every user could overload the application.
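The Akka SDK wires session memory up for us; as a framework-neutral illustration of the idea, here is a minimal plain-Java sketch (the class and method names below are illustrative, not the Akka SDK API): each exchange is appended to a per-session log, old messages beyond a cap are dropped, and the retained messages are replayed as context in front of the next prompt.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Illustrative sketch of session memory (not the Akka SDK API):
// stores user/model messages and replays the last N as context.
class SessionMemory {
    record Message(String role, String text) {}

    private final Deque<Message> log = new ArrayDeque<>();
    private final int maxMessages;

    SessionMemory(int maxMessages) {
        this.maxMessages = maxMessages;
    }

    void add(String role, String text) {
        log.addLast(new Message(role, text));
        // Drop the oldest messages to bound the context size,
        // mirroring the "last 5 or 10 messages" advice above.
        while (log.size() > maxMessages) {
            log.removeFirst();
        }
    }

    // Builds the context block that would be prepended to the next prompt.
    String contextFor(String newPrompt) {
        StringBuilder sb = new StringBuilder();
        for (Message m : log) {
            sb.append(m.role).append(": ").append(m.text).append('\n');
        }
        sb.append("user: ").append(newPrompt);
        return sb.toString();
    }

    List<Message> history() {
        return List.copyOf(log);
    }
}
```

The cap keeps prompts bounded: without it, every request would grow with the full conversation, which is exactly the overload the note above warns about.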

Add the Users API

Next, we need to add an endpoint that returns the list of sessions for a given user.
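Conceptually, this endpoint answers one question: "which sessions belong to this user?" Here is a minimal in-memory stand-in for that lookup (in the actual application the data comes from the view described above; the class, method, and route names here are hypothetical):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative in-memory stand-in for the users/sessions lookup
// (the real application backs this with a view; names are hypothetical).
class SessionRegistry {
    private final Map<String, List<String>> sessionsByUser = new LinkedHashMap<>();

    // Record that a session belongs to a user.
    void recordSession(String userId, String sessionId) {
        sessionsByUser.computeIfAbsent(userId, k -> new ArrayList<>()).add(sessionId);
    }

    // The payload a hypothetical GET /api/users/{userId}/sessions would return.
    List<String> sessionsOf(String userId) {
        return List.copyOf(sessionsByUser.getOrDefault(userId, List.of()));
    }
}
```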

Add the UI Endpoint

Lastly, we need to add an endpoint that serves the static UI (index.html). The index.html page presents the response in a user-friendly way, instead of the raw incremental output we saw in the previous articles.
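The Akka SDK has its own HTTP endpoint component for serving static resources; as a framework-neutral sketch of the mechanics, here is a static index.html served with the JDK's built-in HTTP server (class name, port handling, and content are illustrative):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Framework-neutral sketch: serve a static index.html at "/".
// The Akka SDK provides its own HTTP endpoint component for this;
// the JDK server below only illustrates the mechanics.
public class StaticUiServer {
    public static HttpServer start(int port, String indexHtml) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/", exchange -> {
            byte[] body = indexHtml.getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "text/html; charset=utf-8");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }
}
```

Passing port 0 binds an ephemeral port, which is convenient for local experiments; a real deployment would use the service's configured port.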

Let's Ask a Question!

1. Set the OpenAI API key as an environment variable

2. Start the service locally

3. Use the browser to ask a question
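The three steps above look roughly like this on the command line (the Maven goal and the local port follow common Akka SDK defaults; adjust them to your setup):

```shell
# 1. Set the OpenAI API key as an environment variable
export OPENAI_API_KEY="sk-..."

# 2. Start the service locally from the project root
mvn compile exec:java

# 3. Open the UI in a browser and ask a question
open http://localhost:9000   # macOS; use xdg-open on Linux
```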

Conclusion

With this article, we have walked through the process of building a RAG application using Akka Agentic AI. Now you can play with it, break it, or even reuse it for a whole new scenario. The idea is to get your hands dirty and explore the many features Akka Agentic AI offers. Those who would like to dive deeper into the code can refer to the following link: Akka Agentic RAG.

Further Reading


Himanshu Gupta

Himanshu Gupta is a Principal Architect passionate about building scalable systems, AI‑driven solutions, and high‑impact digital platforms. He enjoys exploring emerging technologies, writing technical articles, and creating accelerators that help teams move faster. Outside of work, he focuses on continuous learning and sharing knowledge with the tech community.
