Understanding ReByte Architecture


Agent

Backend subroutine for your AI application.
In ReByte, an agent is defined as a serverless API that can be executed in the cloud. Agents usually leverage AI models to perform tasks, which is where their intelligence comes from, but this is not required: an agent without any AI model is just an ordinary serverless API. This document focuses on AI agents. Here are some typical examples:
  • Given a user's query, find the most relevant information in the user's knowledge base, summarize it, and return the summary to the user.
  • The user describes a database query in natural language; the agent translates it into SQL, executes it against the user's database, then uses an LLM to generate a summary of the results and returns it to the user.
  • Perform professional translation between two languages: the user describes the translation task in natural language, and the agent not only translates but also evaluates the translation quality, iterating until the quality is good enough.
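The third example is an iterate-until-good-enough loop. A minimal sketch in JavaScript, where `translate` and `scoreQuality` are hypothetical stand-ins for LLM calls (not real ReByte APIs):

```javascript
// Stubs standing in for LLM-backed actions; a real agent would call
// an LLM action here instead. Hypothetical, not ReByte APIs.
async function translate(text, targetLang, opts = {}) {
  return `[${targetLang}] ${text}`;
}
async function scoreQuality(source, draft) {
  return draft.length > 0 ? 0.95 : 0; // stub grade in [0, 1]
}

// Translate, then re-check and refine until quality clears the threshold.
async function translateWithReview(text, targetLang, threshold = 0.9, maxRounds = 3) {
  let draft = await translate(text, targetLang);
  for (let round = 1; round < maxRounds; round++) {
    if (await scoreQuality(text, draft) >= threshold) break; // good enough: stop
    draft = await translate(text, targetLang, { previousDraft: draft }); // refine
  }
  return draft;
}
```

The cap on `maxRounds` keeps the agent from looping forever when the quality score never clears the threshold.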
There are two types of agents defined in ReByte:

Stack Agent

  • Stack Agent is a sequence of actions executed on the LLM serverless runtime. It is the core building block of ReByte and the main way for end users to create their own tools. ReByte provides a GUI builder for end users to create and edit their own LLM agents, a list of pre-built actions for common use cases, and a private SDK that lets software engineers build their own actions and integrate them seamlessly with the agent builder. Pre-built actions include:
    • LLM Actions
      • Language Model Completion Interface
      • Language Model Chat Interface
    • Data Actions
      • Dataset Loader, loads predefined datasets for later processing
      • File Loader, extracts, transforms, and loads user-provided files
      • Semantic Search, searches for similar content in the user's knowledge base
    • Tools Actions
      • Search Engine, searches for information on Google/Bing
      • Web Crawler, crawls web pages and extracts information
      • HTTP Request Maker, makes any HTTP request to any public or private API
    • Control flow Actions
      • Loop Until, runs actions repeatedly until a condition is met
      • Parallel, executes multiple actions in parallel
      • Vanilla JavaScript, executes arbitrary vanilla JavaScript code, useful for pure data transformation
    • Code Interpreter Actions
      • Without relying on OpenAI, ReByte provides a code interpreter that can execute JavaScript code.
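To make the "stack" idea concrete, here is a hypothetical sketch of the first example agent (knowledge-base Q&A) expressed as a sequence of pre-built actions. The action names mirror the list above, but the configuration schema is purely illustrative, not the real ReByte format:

```javascript
// Hypothetical Stack Agent definition: actions run top to bottom, each
// able to read earlier actions' outputs. Illustrative schema only.
const qaAgent = {
  name: "kb-qa",
  actions: [
    // 1. Retrieve the most relevant passages from the knowledge base.
    { type: "SemanticSearch", knowledge: "my-docs", query: "{{input.query}}", topK: 5 },
    // 2. Summarize the retrieved passages into an answer.
    { type: "LLMChat",
      prompt: "Summarize these passages to answer the question:\n" +
              "{{actions[0].results}}\n\nQuestion: {{input.query}}" },
  ],
};
```

The point is the shape: a Stack Agent is a deterministic pipeline, so each action's inputs are wired explicitly from the user's input or an earlier action's output.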

Group Agent

  • Group Agent is a group of Stack Agents. The biggest difference between a Group Agent and a Stack Agent is that state transitions between the member Stack Agents are controlled solely by the LLM, yielding non-deterministic behavior. AutoGPT is a great example of a Group Agent. ReByte builds in a Group Agent builder that allows users to create their own Group Agents.
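The LLM-controlled transition can be sketched as a routing loop. In this sketch, `chooseNext` stands in for an LLM call that picks the next Stack Agent; none of these names are real ReByte APIs:

```javascript
// Sketch of a Group Agent: after each step, an LLM (stubbed below)
// decides which Stack Agent runs next -- hence the non-determinism.
async function runGroupAgent(agents, input, maxSteps = 10) {
  let state = { input, history: [] };
  for (let step = 0; step < maxSteps; step++) {
    const name = await chooseNext(state, Object.keys(agents)); // LLM picks; "done" stops
    if (name === "done") break;
    const output = await agents[name](state);
    state.history.push({ agent: name, output });
  }
  return state;
}

async function chooseNext(state, names) {
  // Stub: a real Group Agent would prompt an LLM with the current state
  // and the list of available Stack Agents.
  return state.history.length === 0 ? names[0] : "done";
}
```

Contrast this with a Stack Agent, where the order of actions is fixed at build time; here the order emerges at run time from the LLM's choices, which is why a `maxSteps` guard is useful.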

App Builder - build your own tools

End user facing UI for your AI application.
  • We believe a chat UI may be the best way for end users to interact with an LLM, but it is definitely not the only one. ReByte provides builders for end users to build their own UIs for their LLM agents; we call these customized UIs. A customized UI can be any valid UI code generated by an LLM; ReByte hosts the UI code and provides a URL through which end users access it.
  • The ReByte customized UI builder integrates seamlessly with LLM agents, so end users can easily build their own tools without writing a single line of code.

Knowledge - capture private data

Ingredient for your AI application.
  • Knowledge is private data stored in a ReByte-managed vector database. ReByte currently provides the following connectors for end users to import their knowledge:
    • Local file, supported file types are:
      • "doc", "docx", "img", "epub", "jpeg", "jpg", "png", "xls", "xlsx", "ppt", "pptx", "md", "txt", "rtf", "rst", "pdf", "json", "html"
    • Notion
    • Discord
    • GitHub
    • More connectors are coming soon
  • Knowledge can be used in LLM agents for semantic search or for data augmentation. A great example is to run semantic search over a user's private knowledge base and use the results to augment a language model's prompt, a.k.a. Retrieval-Augmented Generation (RAG).
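A minimal sketch of the RAG pattern just described, where `semanticSearch` and `llmChat` are hypothetical stand-ins for the Semantic Search and LLM Chat actions (stubbed here, not real ReByte APIs):

```javascript
// Stub retrieval over the vector database: returns the top-K passages.
async function semanticSearch(knowledge, query, topK) {
  return ["passage A", "passage B"].slice(0, topK);
}
// Stub LLM call; a real agent would invoke an LLM Chat action.
async function llmChat(prompt) {
  return `answer based on: ${prompt.includes("passage A")}`;
}

// Retrieval-Augmented Generation: retrieve relevant passages, then
// put them into the prompt so the LLM answers from private data.
async function ragAnswer(knowledge, question) {
  const passages = await semanticSearch(knowledge, question, 3);
  const prompt =
    `Answer using only the context below.\n` +
    `Context:\n${passages.join("\n")}\n\nQuestion: ${question}`;
  return llmChat(prompt);
}
```

The augmentation step is just prompt construction: the retrieved passages become context the model is instructed to answer from, which grounds the response in the user's private knowledge base.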