Embedia

Create LLM-powered webapps with ease

Discord · Docs · Blog · Twitter


Webapps you can create using Embedia

  • Chatbots (like ChatGPT) with persistent memory that can correlate information across multiple chats or access the internet
  • Chatbots with a specific personality (or a combination of personalities)
  • Natural language search over a set of documents / webpages / datasets using Retrieval Augmented Generation (RAG)
  • Customer support bots with access to your FAQs that can guide users in the right direction
  • RAG-based recommendation / clustering systems
  • AI Agents that work with you to solve problems
  • Autonomous AI Agents that can mindfully use the tools you provide
  • Entire SaaS products that use LLMs to solve a specific problem

Why choose Embedia?

  • Developer friendly: Easy-to-follow documentation, IntelliSense enabled
  • Pre-defined common AI Agents and Tools
  • LLM agnostic: Connect any LLM you want (GPT-4, Bard, Llama, Custom-trained, etc)
  • Vector DB agnostic: Connect any vector DB you want (Weaviate, Pinecone, Chroma, etc)
  • Graph DB agnostic: Connect any graph DB you want (Neo4j, Nebula, etc)
  • Pub-sub based event system to build highly customizable workflows
  • Async: Built from the ground up to be async
  • Lightweight: Has very few dependencies and a tiny package size
  • Small dev team with a clear focus on developer experience and scalability

How to use it?

  • You'll need Python 3.8 or higher to use this library
  • Install using pip install embedia

Step 1: Define your Tokenizer class:

import tiktoken
from embedia import Tokenizer

class OpenAITokenizer(Tokenizer):
    async def _tokenize(self, text):
        # Encode the text with the tokenizer used by gpt-3.5-turbo
        return tiktoken.encoding_for_model("gpt-3.5-turbo").encode(text)

Step 2: Define your ChatLLM class:

import openai
from embedia import ChatLLM

class OpenAIChatLLM(ChatLLM):
    def __init__(self):
        super().__init__(tokenizer=OpenAITokenizer())
        openai.api_key = "YOUR-API-KEY"  # replace with your OpenAI API key

    async def _reply(self):
        # Send the accumulated chat history to the OpenAI API and return the reply
        completion = await openai.ChatCompletion.acreate(
            model="gpt-3.5-turbo",
            messages=[{'role': msg.role, 'content': msg.content}
                      for msg in self.chat_history],
        )
        return completion.choices[0].message.content

Step 3: Run your AI Agent

import asyncio
from embedia import Persona
from embedia.agents import ToolUser
from embedia.tools import PythonInterpreter

# Ask a Python-expert persona to write code for the task
python_coder = OpenAIChatLLM()
asyncio.run(python_coder.set_system_prompt(
    Persona.CodingLanguageExpert.format(language='Python')
))
code = asyncio.run(python_coder(
    'Count number of lines of python code in the current directory'
))
# Hand the generated code to a ToolUser agent, which runs it with the PythonInterpreter tool
tool_user = ToolUser(chatllm=OpenAIChatLLM(), tools=[PythonInterpreter()])
asyncio.run(tool_user(code))

Quick glance over the library internals

The core classes of Embedia are:

  • Tokenizer: A class that converts text into tokens
  • LLM: A class that interfaces with a next-token-generation style large language model (e.g. text-davinci-003; see the sketch after this list)
  • ChatLLM: A class that interfaces with a chat-style large language model (e.g. gpt-3.5-turbo)
  • Tool: A class that can convert any Python function into a tool that can be used by an Agent
  • EmbeddingModel: A class that interfaces with an embedding model (e.g. text-embedding-ada-002)
  • VectorDB: A class that interfaces with a vector database (e.g. Weaviate)
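
As a rough sketch, a completion-style LLM can be wrapped the same way as the ChatLLM in Step 2. Note that the _complete hook name, the constructor signature, and the top-level LLM import below are assumptions based on the _tokenize / _reply pattern shown above, not confirmed API; check the documentation for the exact method to override.

import openai
from embedia import LLM  # assumption: LLM is importable from the top level like ChatLLM

class OpenAILLM(LLM):
    def __init__(self):
        # Assumption: LLM, like ChatLLM, accepts a tokenizer in its constructor
        super().__init__(tokenizer=OpenAITokenizer())
        openai.api_key = "YOUR-API-KEY"

    async def _complete(self, prompt):  # hook name is an assumption
        # Next-token completion with a completion-style model
        completion = await openai.Completion.acreate(
            model="text-davinci-003",
            prompt=prompt,
            max_tokens=500,
        )
        return completion.choices[0].text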

Pre-defined Tools include:

  • PythonInterpreter: A tool that runs Python code in the Python interpreter
  • Terminal: A tool that runs shell commands in the terminal (used in the sketch after this list)
  • 10+ file operation tools: For reading, writing, copying, moving, and deleting files / folders
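
These tools plug into the same ToolUser pattern shown in Step 3. A minimal sketch, reusing the OpenAIChatLLM from Step 2 and assuming Terminal is importable from embedia.tools in the same way as PythonInterpreter:

import asyncio
from embedia.agents import ToolUser
from embedia.tools import PythonInterpreter, Terminal  # Terminal import path is assumed

# The agent reads each tool's docstring and decides which one to run for the task
tool_user = ToolUser(chatllm=OpenAIChatLLM(),
                     tools=[Terminal(), PythonInterpreter()])
asyncio.run(tool_user('List the five largest files in the current directory'))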

Pre-defined Agents include:

  • ToolUser: An LLM-powered System-1 thinker that can run tools in a loop by reading their docstrings

Helpers include:

  • Pub-sub based event system for building highly customizable workflows
  • Persona: An enum class containing pre-defined system prompts (used in the sketch after this list)
  • TextDoc: A class for working with text documents
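
For example, a Persona entry can be combined with set_system_prompt (as in Step 3) to build a chatbot with a fixed personality. CodingLanguageExpert is the only member shown in this README; see the documentation for the full list.

import asyncio
from embedia import Persona

# Reuse the OpenAIChatLLM from Step 2 and pin its personality with a pre-defined Persona
reviewer = OpenAIChatLLM()
asyncio.run(reviewer.set_system_prompt(
    Persona.CodingLanguageExpert.format(language='Python')
))
print(asyncio.run(reviewer('Review this function: def add(a, b): return a - b')))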

Learn more about them on our documentation page

How to contribute to the codebase?

This library is under active and rapid development. We'd love your contributions to make it better. To get started, check out contributing.md.

Become a sponsor

Recurring revenue sponsors will get benefits like:

  • Sponsored Screencasts with code
  • Early access to Embedia's SaaS products
  • Visibility on our website and social media

Partner with us

We'd love to partner with companies and libraries in the AI and web-dev ecosystem. If you'd like to get in touch, we're always active on our Discord server.

License

Copyright Sudhanshu Passi, 2023, under the terms of the Apache 2.0 license
