
AK Bot

Seamless access to data with LLM-based agents

Arionkoder developed AK Bot, an AI-powered assistant that enables users to engage in human-like conversations and obtain immediate answers from multiple data sources.

The vision

To simplify how teams interact with data, streamline workflows, and enable better decision-making, allowing organizations to focus on what truly matters.

The Transformation

Arionkoder’s ML experts and backend engineers developed a versatile tool that can be seamlessly integrated into any stage of a company’s workflow as a chatbot. By prioritizing usability, they ensured the tool would reduce the learning curve and encourage adoption over time, making it an ideal solution for blending into diverse working environments.

With AK Bot at the core of their operations, the team now accesses critical data more efficiently, eliminating time spent searching across multiple sources. Real-time insights provided by the bot enhance decision-making and improve collaboration, empowering the team to focus on higher-value, strategic tasks.

AI

LLM

Team

ML engineer

Data Scientist

Back-End Engineer

QA Engineer

Product Owner

Project Manager

Delivery

Slack-based Retrieval Augmented Generation app

Back-end connections to Google Drive, Confluence and HR platform

Dashboard to monitor user metrics

Mapping the questions

By interviewing different stakeholders and potential users of the tool, we identified common types of questions and use cases in which the tool could provide a solution.

We then mapped each common unanswered question to the platform within the company that held that information, and reviewed the associated APIs.

Getting answers from given data with Retrieval Augmented Generation

We used Retrieval Augmented Generation (RAG), which enables us to exploit generative AI to answer questions based on existing knowledge databases.

RAG uses semantic search over embeddings to retrieve the pieces of information from a database that are most relevant to the question being asked.

A generative AI model such as OpenAI's GPT-3.5 is then prompted with instructions, the question, and the retrieved contexts to craft a human-like answer.
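
As a rough illustration of this flow (not AK Bot's actual code), the sketch below embeds the question, retrieves the closest passages from a vector store, and prompts a chat model with them. The model names and the vector_store interface are assumptions for the example.

```python
# Minimal RAG sketch (illustrative only): embed the question, fetch the most
# similar passages, and prompt a chat model to answer from them.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def answer_question(question: str, vector_store) -> str:
    # 1. Embed the question (the embedding model name is an assumption here).
    q_emb = client.embeddings.create(
        model="text-embedding-3-small", input=question
    ).data[0].embedding

    # 2. Semantic search: retrieve the passages closest to the question.
    contexts = vector_store.search(q_emb, top_k=4)  # hypothetical store interface

    # 3. Prompt the generative model with instructions, contexts, and question.
    prompt = (
        "Answer the question using only the context below.\n\n"
        "Context:\n" + "\n---\n".join(contexts) + f"\n\nQuestion: {question}"
    )
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return completion.choices[0].message.content
```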

Combining multiple sources of information

We used the APIs provided by Confluence, Google Drive and HiBob to retrieve internal documentation, files and templates, and human resources data.

This information is organized into embedding-based vector databases and tabular data, both refreshed twice a day, giving us full control over the contexts provided to the RAG tool.
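
A simplified view of such an ingestion job is sketched below; the source connectors and the vector/table store interfaces are hypothetical placeholders, not the actual integrations.

```python
# Illustrative ingestion job (placeholder interfaces): pull documents from each
# source, embed the text, and refresh the stores on a fixed schedule.

def refresh_knowledge_base(sources, embedder, vector_store, table_store):
    for source in sources:                    # e.g. Confluence, Google Drive, HiBob connectors
        for doc in source.fetch_documents():  # each connector wraps that platform's API
            if doc.is_tabular:
                table_store.upsert(doc.table)  # HR-style records go to tabular storage
            else:
                for chunk in split_into_chunks(doc.text):
                    vector_store.upsert(
                        id=f"{doc.id}:{hash(chunk)}",
                        embedding=embedder.embed(chunk),
                        metadata={"source": source.name, "doc_id": doc.id},
                        text=chunk,
                    )


def split_into_chunks(text: str, size: int = 1000) -> list[str]:
    # Naive fixed-size chunking; real pipelines usually split on document structure.
    return [text[i:i + size] for i in range(0, len(text), size)]
```

In practice a job like this would be scheduled to run twice a day, matching the refresh cadence described above.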

Instead of classic GenAI prompting, we implemented LLM-based agents. Using language as a proxy for reasoning, these agents plan which actions to take, chosen from a predefined set, and collect the data needed to answer a given question.

We carefully implemented actions to retrieve data from each of our sources, keyed to the overall characteristics of the input question, so the agent can choose the right source of information and follow the steps needed to fetch the data.
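
One common way to wire such an agent is with function calling, as sketched below: each retrieval action is declared as a tool and the model decides which one to invoke for a given question. The tool names, schemas, and handlers here are illustrative assumptions, not AK Bot's actual action set.

```python
# Illustrative agent loop: the model picks one of a predefined set of retrieval
# actions (tools), we execute it, and the model then drafts the final answer.
import json
from openai import OpenAI

client = OpenAI()

TOOLS = [
    {"type": "function", "function": {
        "name": "search_docs",
        "description": "Search internal documentation and files",
        "parameters": {"type": "object",
                       "properties": {"query": {"type": "string"}},
                       "required": ["query"]}}},
    {"type": "function", "function": {
        "name": "lookup_hr_record",
        "description": "Look up tabular HR data for an employee",
        "parameters": {"type": "object",
                       "properties": {"employee": {"type": "string"}},
                       "required": ["employee"]}}},
]

HANDLERS = {
    # Placeholders: real handlers would call the retrieval code for each source.
    "search_docs": lambda query: "…retrieved documentation passages…",
    "lookup_hr_record": lambda employee: "…HR record…",
}


def run_agent(question: str) -> str:
    messages = [{"role": "user", "content": question}]
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo", messages=messages, tools=TOOLS
    ).choices[0].message
    while reply.tool_calls:  # the model asked for one of the predefined actions
        messages.append(reply)
        for call in reply.tool_calls:
            result = HANDLERS[call.function.name](**json.loads(call.function.arguments))
            messages.append({"role": "tool", "tool_call_id": call.id,
                             "content": str(result)})
        reply = client.chat.completions.create(
            model="gpt-3.5-turbo", messages=messages, tools=TOOLS
        ).choices[0].message
    return reply.content
```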

A Slack interface for seamless integration

Since we use Slack for our regular conversations with teammates, it was the ideal channel to communicate with our bot.

We fully integrated our tool into Slack, including features such as welcome messages, onboarding instructions, feedback buttons, reactions to answers, and even push notifications every two weeks with new feature updates.
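
A minimal version of that wiring, using Slack's Bolt for Python SDK, could look like the sketch below; the answer() stub and the feedback action IDs are placeholders for illustration, not the production bot.

```python
# Minimal Slack integration sketch using Bolt for Python (slack_bolt).
import os

from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler

app = App(token=os.environ["SLACK_BOT_TOKEN"])


def answer(question: str) -> str:
    return "…answer from the RAG/agent backend…"  # placeholder for the real backend call


@app.event("app_mention")
def handle_mention(event, say):
    reply = answer(event["text"])
    say(blocks=[
        {"type": "section", "text": {"type": "mrkdwn", "text": reply}},
        {"type": "actions", "elements": [
            {"type": "button", "text": {"type": "plain_text", "text": "👍"},
             "action_id": "feedback_up"},
            {"type": "button", "text": {"type": "plain_text", "text": "👎"},
             "action_id": "feedback_down"},
        ]},
    ], text=reply)


@app.action("feedback_up")
def record_positive_feedback(ack, body):
    ack()  # acknowledge the click; a real handler would persist the feedback


@app.action("feedback_down")
def record_negative_feedback(ack, body):
    ack()


if __name__ == "__main__":
    SocketModeHandler(app, os.environ["SLACK_APP_TOKEN"]).start()
```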

Our own regression testing framework to always ensure reliable answers

To make sure that any new implemented feature or change in our Agents does not affect the quality of previous functionalities, we implemented a regression testing framework.

Since answers are non-deterministic due to the intrinsic nature of generative AI, we implemented a custom solution based on prompt engineering and LLMs: for every new feature, a test set of questions is crafted in advance, and the framework automatically verifies which of them the bot answers correctly.

When a new feature is deployed, we trigger the regression testing pipeline to ensure that performance improves or at least remains the same.
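
One way to realize such a check is sketched below: each test case pairs a question with the facts the answer should contain, and an LLM judge verifies whether the bot's non-deterministic answer covers them. The test set, model, and pass criterion are illustrative assumptions.

```python
# Illustrative regression test: ask the bot each pre-crafted question and use
# an LLM judge to check the non-deterministic answer against expected facts.
from openai import OpenAI

client = OpenAI()

TEST_SET = [  # a hypothetical, hand-crafted test set for one feature
    {"question": "How many vacation days do we get per year?",
     "expected": "States the annual vacation allowance from the HR policy."},
]


def judge(question: str, expected: str, answer: str) -> bool:
    verdict = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": (
            "You are verifying a chatbot answer.\n"
            f"Question: {question}\nExpected: {expected}\nAnswer: {answer}\n"
            "Reply with only PASS or FAIL."
        )}],
    ).choices[0].message.content
    return "PASS" in verdict.upper()


def run_regression(bot_answer) -> float:
    passed = sum(
        judge(case["question"], case["expected"], bot_answer(case["question"]))
        for case in TEST_SET
    )
    return passed / len(TEST_SET)  # compared against the previous release's score
```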

A user-metrics dashboard to monitor usage of the tool

We built a dashboard that allows stakeholders to monitor the overall usage of the application while giving the development team the metrics they need to understand the bot's behavior and keep improving it.

Metrics include the number of questions and new users, the percentage of answers that received feedback, positive and negative responses, and many others. The dashboard is dynamic: as we add new features, we can track them as well.
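
As an example of the aggregation behind such a dashboard, the sketch below computes a few of these metrics from a simple per-question event log; the schema is an assumption, not the actual one.

```python
# Illustrative metrics aggregation for the dashboard, assuming an event log with
# one row per answered question (the schema is a placeholder, not the real one).
import pandas as pd


def compute_metrics(events: pd.DataFrame) -> dict:
    # Expected columns: "user", "timestamp", "feedback" ("up", "down", or None).
    total = len(events)
    with_feedback = int(events["feedback"].notna().sum())
    return {
        "questions_asked": total,
        "unique_users": int(events["user"].nunique()),
        "feedback_rate": with_feedback / total if total else 0.0,
        "positive_feedback": int((events["feedback"] == "up").sum()),
        "negative_feedback": int((events["feedback"] == "down").sum()),
    }
```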

"I was honestly surprised by how well AK Bot works. It’s like having a super helpful teammate always ready to assist. Instead of logging into multiple platforms or digging through documentation, I get the answers I need in seconds."

Braulio de León

Lead Engineer @ Arionkoder

Transformation takeaways

We built a tool that can be rapidly deployed in different contexts to digest knowledge from databases, including organizational reports, minutes from past meetings, customer support information, and more.

AK Bot is a fully functional chatbot that connects users with answers to their questions and knowledge resources in a fraction of a second. With our experience in RAG and Agents, we’re ready to deliver these tools fast and accurately.

© 2024 Arionkoder Works. All rights reserved.
