Project description

Embedia

Make LLM-powered webapps with ease

Discord · Docs · Blog · Twitter


What is Embedia?

Embedia is a framework for making LLM-powered webapps with ease.

With Python 3.8 or higher installed, install Embedia with pip:

pip install embedia
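
If the install succeeded, the package should import cleanly; a quick sanity check from the shell (this assumes nothing beyond a successful pip install):

python -c "import embedia"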

Which webapps can be created using Embedia?

Embedia is built with both the common and the advanced use cases of LLMs in webapps in mind.

Some advanced use cases include:

  • AI Agents that can run predefined code with custom parameters based on conversation context
  • Natural language search over files, websites, or datasets powered by Retrieval Augmented Generation
  • Coding assistants that can translate, write, run, test, and debug code
  • All the functionality of OpenAI's ChatGPT, but with an open-source LLM like Llama-2

Some common use cases include:

  • Chatbots with a personality (similar to character.ai)
  • Panel discussions between multiple personalities, which can also be internal voices of a single, more complex chatbot
  • Language translators, improvers, and correctors
  • Text summarizers
  • Keyword extractors
  • Sentiment analyzers
  • Social media content generators
  • Creative writing assistants
  • Planners for specific, complex tasks
  • Text-based adventure games

Why choose Embedia?

  • Developer friendly: Easy-to-follow documentation, IntelliSense enabled
  • Pre-defined common AI Agents and Tools
  • LLM agnostic: Our universal APIs are LLM-independent and can be used with any LLM, whether you're using a service provider like OpenAI, Anthropic, or Google, or have deployed your own open-source model like Llama-2, Falcon, or Vicuna.
  • DB agnostic: Our APIs are also independent of which vector database (or graph database) you connect to your web application. Your vector database might be managed by a cloud provider like Weaviate, Pinecone, or Elasticsearch, or it might run in a Docker container alongside your webapp.
  • Pub-sub based event system to build highly customizable workflows
  • Async: Built from the ground up to be asynchronous. It works out of the box with asynchronous web frameworks like FastAPI, Starlette, and Sanic (see the FastAPI sketch under "How to use it?" below).
  • Lightweight: With production use cases in mind, we have kept the library's dependencies to a minimum, making it a very lightweight component in your web stack.
  • Small dev team with a clear focus on developer experience and scalability

How to use it?

Step 1: Connect your LLM

import os

import openai
from embedia import ChatLLM


class OpenAIChatLLM(ChatLLM):
    def __init__(self):
        super().__init__()
        # Read the key from the environment instead of hard-coding it
        openai.api_key = os.environ["OPENAI_API_KEY"]

    async def _reply(self, prompt):
        completion = await openai.ChatCompletion.acreate(
            model="gpt-3.5-turbo",
            temperature=0.1,
            messages=[
                {"role": msg.role, "content": msg.content} for msg in self.chat_history
            ],
        )
        return completion.choices[0].message.content
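
Because the only provider-specific code lives inside _reply, pointing the same pattern at a different backend is just another small subclass. The sketch below is our own illustration rather than anything Embedia ships: the endpoint URL and model name are placeholders for a self-hosted, OpenAI-compatible server running an open-source model, and note that openai.api_base is module-global in the 0.x openai client, so only one base URL is active per process.

import openai
from embedia import ChatLLM


class LocalChatLLM(ChatLLM):
    def __init__(self):
        super().__init__()
        # Placeholders: point the openai client at your own
        # OpenAI-compatible inference server instead of api.openai.com
        openai.api_base = "http://localhost:8000/v1"
        openai.api_key = "not-needed-for-local-servers"

    async def _reply(self, prompt):
        completion = await openai.ChatCompletion.acreate(
            model="llama-2-7b-chat",  # whatever model your server exposes
            temperature=0.1,
            messages=[
                {"role": msg.role, "content": msg.content} for msg in self.chat_history
            ],
        )
        return completion.choices[0].message.content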

Step 2: Use your AI Agent

import asyncio
from embedia.agents import ToolUserAgent
from embedia.tools import PythonInterpreterTool, TerminalTool

# Create an AI agent and give it Tools
ai_agent = ToolUserAgent(
    chatllm=OpenAIChatLLM(), tools=[PythonInterpreterTool(), TerminalTool()]
)

# Ask the AI agent to solve a problem for you
question = "Find the number of lines of code in main.py"
asyncio.run(ai_agent(question))
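
Calling the agent returns a coroutine, so the same object drops straight into an async web framework such as FastAPI. Here is a minimal sketch that reuses the ai_agent built above; the /ask route and request model are our own illustration, not part of Embedia:

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class Question(BaseModel):
    question: str


@app.post("/ask")
async def ask(body: Question):
    # Await the agent directly instead of wrapping it in asyncio.run()
    result = await ai_agent(body.question)
    # The exact return type depends on Embedia; stringify it so the
    # response stays JSON-serializable.
    return {"result": str(result)}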

A quick glance at the library internals

The core classes of Embedia are:

  • Tokenizer: A class that converts text into tokens
  • LLM: A class that interfaces with a next-token-generation (completion-style) large language model (eg: text-davinci-003)
  • ChatLLM: A class that interfaces with a chat-style large language model (eg: gpt-3.5-turbo)
  • Tool: A class that can convert any Python function into a tool that can be used by an Agent
  • EmbeddingModel: A class that interfaces with an embedding model (eg: text-embedding-ada-002)
  • VectorDB: A class that interfaces with a vector database (eg: Weaviate)
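
As an illustration of the subclassing pattern, here is a sketch of an OpenAI-backed EmbeddingModel. It assumes EmbeddingModel is importable from embedia and overridden the same way ChatLLM is above; the _embed hook name is a guess by analogy with ChatLLM._reply, so check the documentation for the actual method to override:

import openai
from embedia import EmbeddingModel


class OpenAIEmbedding(EmbeddingModel):
    # NOTE: the override name _embed is assumed by analogy with
    # ChatLLM._reply and is not confirmed by the Embedia docs.
    async def _embed(self, text):
        result = await openai.Embedding.acreate(
            model="text-embedding-ada-002",
            input=[text],
        )
        return result["data"][0]["embedding"]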

Pre-defined Tools include:

  • PythonInterpreterTool: A tool that can run Python code in the Python interpreter
  • TerminalTool: A tool that can run shell commands in the terminal
  • 10+ file operation tools: For reading, writing, copying, moving, and deleting files and folders

Pre-defined Agents include:

  • ToolUserAgent: An LLM-powered System-1 thinker that can run tools in a loop by reading their docstrings

Helpers include:

  • Pub-sub based event system for building highly customizable workflows
  • Persona: An enum class containing pre-defined system prompts
  • TextDoc: A class used for dealing with text documents

Learn more about them on our documentation page.

How to contribute to the codebase?

This library is under active and rapid development. We'd love your contributions to make it better. To get started, you can check out contributing.md

Become a sponsor

Recurring revenue sponsors will get benefits like:

  • Sponsored Screencasts with code
  • Early access to Embedia's SaaS products
  • Visibility on our website and social media

Partner with us

We'd love to partner with companies and libraries in the AI and web-dev ecosystem. If you'd like to get in touch, we're always active on our Discord server.

License

Copyright Sudhanshu Passi, 2023, under the terms of the Apache 2.0 license.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

embedia-0.0.3.tar.gz (36.4 kB)

Built Distribution

embedia-0.0.3-py3-none-any.whl (37.9 kB)

File details

Details for the file embedia-0.0.3.tar.gz.

File metadata

  • Download URL: embedia-0.0.3.tar.gz
  • Upload date:
  • Size: 36.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.5

File hashes

Hashes for embedia-0.0.3.tar.gz

  • SHA256: 11547f2c7d512c33ddcbb9bb19c62388873bf6e21555a801f5edb8cfad295e55
  • MD5: abcfaac01ef3f0093364fe193b33f48d
  • BLAKE2b-256: e268cd750009a4612359360d79d3a260302e1eeb915044b87c828514d9caa36c

See more details on using hashes here.

File details

Details for the file embedia-0.0.3-py3-none-any.whl.

File metadata

  • Download URL: embedia-0.0.3-py3-none-any.whl
  • Upload date:
  • Size: 37.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.5

File hashes

Hashes for embedia-0.0.3-py3-none-any.whl

  • SHA256: 1e9d78845d073c8cc1164556f889979faba147ead176b43f439d3e638d74ba27
  • MD5: 89bc669e1bf97e511298a2afd19a27b4
  • BLAKE2b-256: 7083c9d7da499536c04b31689616af89a8a6b733f967b3f1ea6de2ac8f0944a2

See more details on using hashes here.
