
Dewy - The Knowledgebase for AI

Opinionated knowledge extraction and semantic retrieval for Gen AI applications.
Explore the docs »

Report Bug · Request Feature

About The Project

Dewy helps you build AI agents and RAG applications by managing the extraction of knowledge from your documents and implementing semantic search over the extracted content. Load your documents and Dewy takes care of parsing, chunking, summarizing, and indexing for retrieval. Dewy builds on the lessons of putting real Gen AI applications into production so you can focus on getting 💩 done, rather than comparing vector databases and building data extraction infrastructure.

Below is the typical architecture of an AI agent performing RAG. Dewy handles all of the parts shown in brown so you can focus on your application -- the parts in green.

System architecture showing steps of RAG.

(back to top)

Getting Started

To get a local copy up and running, follow these steps.

  1. (Optional) Start a pgvector instance to persist your data

    Dewy uses a vector database to store metadata about the documents you've loaded as well as embeddings used to provide semantic search results.

    docker run -d \
      -p 5432:5432 \
      -e POSTGRES_DB=dewydb \
      -e POSTGRES_USER=dewydbuser \
      -e POSTGRES_PASSWORD=dewydbpwd \
      -e POSTGRES_HOST_AUTH_METHOD=trust \
      ankane/pgvector
    

    If you already have an instance of pgvector, you can create a database for Dewy and configure Dewy to use it via the DB env var (see below).

  2. Install Dewy

    pip install dewy
    

    This will install Dewy in your local Python environment.

  3. Configure Dewy. Dewy reads env vars from a .env file if one is provided. You can also set them directly in the environment, for example when configuring an instance running in Docker / Kubernetes.

    # ~/.env
    ENVIRONMENT=LOCAL
    DB=postgresql://...
    OPENAI_API_KEY=...
    
  4. Fire up Dewy

    dewy
    

    Dewy includes an admin console you can use to create collections, load documents, and run test queries.

    open http://localhost:8000/admin
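If you started the pgvector container from step 1, the DB value in your .env points at that container. The example below is an assumption derived from the docker run flags above (user, password, database name, and port all come from that command):

```
# ~/.env
ENVIRONMENT=LOCAL
DB=postgresql://dewydbuser:dewydbpwd@localhost:5432/dewydb
OPENAI_API_KEY=...
```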
    

Using Dewy in TypeScript / JavaScript

  1. Install the API client library

    npm install dewy-ts
    
  2. Connect to an instance of Dewy

    import { Dewy } from 'dewy_ts';
    const dewy = new Dewy()
    
  3. Add documents

    await dewy.kb.addDocument({
      collection_id: 1,
      url: "https://arxiv.org/abs/2005.11401",
    })
    
  4. Retrieve document chunks for LLM prompting

    const context = await dewy.kb.retrieveChunks({
      collection_id: 1,
      query: "tell me about RAG",
      n: 10,
    });
    
    // Minimal prompt example
    const prompt = [
      {
        role: 'system',
        content: `You are a helpful assistant.
        You will take into account any CONTEXT BLOCK that is provided in a conversation.
        START CONTEXT BLOCK
        ${context.results.map((c: any) => c.chunk.text).join("\n")}
        END OF CONTEXT BLOCK
        `,
      },
    ]
    
    // Using OpenAI to generate responses
    const response = await openai.chat.completions.create({
      model: 'gpt-3.5-turbo',
      stream: true,
      messages: [...prompt, { role: 'user', content: 'Tell me about RAG' }],
    })
    

Using Dewy in Python

  1. Install the API client library

    pip install dewy-client
    
  2. Connect to an instance of Dewy

    from dewy_client import Client
    dewy = Client(base_url="http://localhost:8000")
    
  3. Add documents

    from dewy_client.api.kb import add_document
    from dewy_client.models import AddDocumentRequest
    await add_document.asyncio(client=dewy, body=AddDocumentRequest(
      collection_id = 1,
      url = "https://arxiv.org/abs/2005.11401",
    ))
    
  4. Retrieve document chunks for LLM prompting

    from dewy_client.api.kb import retrieve_chunks
    from dewy_client.models import RetrieveRequest
    chunks = await retrieve_chunks.asyncio(client=dewy, body=RetrieveRequest(
      collection_id = 1,
      query = "tell me about RAG",
      n = 10,
    ))
    
    # Minimal prompt example.
    # Note: joining outside the f-string avoids the backslash-in-expression
    # restriction of f-strings on Python < 3.12.
    context = "\n".join(chunk.text for chunk in chunks.text_results)
    prompt = f"""
    You will take into account any CONTEXT BLOCK that is provided in a conversation.
      START CONTEXT BLOCK
      {context}
      END OF CONTEXT BLOCK
    """
    

See [python-langchain.ipynb](demos/python-langchain-notebook/python-langchain.ipynb) for an example of using Dewy with LangChain, including an implementation of LangChain's BaseRetriever backed by Dewy.

Roadmap

Dewy is under active development. This is an overview of our current roadmap; please 👍 issues that are important to you. Don't see a feature that would make Dewy better for your application? Create a feature request!

  • Support more document formats (e.g., Markdown, DOCX, HTML)
  • Support more types of chunk extractors
  • Multi-modal search over images, tables, audio, etc.
  • Integrations with LangChain, LlamaIndex, Haystack, etc.
  • Support flexible result ranking (e.g., RAG-fusion, MMR)
  • Provide metrics around which chunks are used, relevance scores, etc.
  • Query history and explorer in the UI.
  • Multi-tenancy
  • Hybrid search

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

Development Installation

  1. Clone the repo
    git clone https://github.com/DewyKB/dewy.git
    
  2. Install Python packages
    poetry install
    
  3. Configure Dewy. Dewy reads env vars from a .env file if one is provided. You can also set them directly in the environment, for example when configuring an instance running in Docker / Kubernetes.
    cat > .env << EOF
    ENVIRONMENT=LOCAL
    DB=postgresql://...
    OPENAI_API_KEY=...
    EOF
    
  4. Build the frontend
    cd frontend && npm install && npm run build
    
  5. Build the client
    cd dewy-client && poetry install
    
  6. Run the Dewy service
    poetry run dewy
    

Practices

The project skeleton follows best practices from https://github.com/zhanymkanov/fastapi-best-practices.

The following commands run tests and apply linting. If you're in a poetry shell, you can omit the poetry run prefix:

  • Running tests: poetry run pytest
  • Linting (and formatting): poetry run ruff check --fix
  • Formatting: poetry run ruff format
  • Type Checking: poetry run mypy dewy

To regenerate the OpenAPI spec and client libraries:

poetry poe extract-openapi
poetry poe update-client

(back to top)

Releasing
  1. Look at the draft release to determine the suggested next version.
  2. Create a PR updating the following locations to that version:
     a. pyproject.toml for dewy
     b. dewy-client/pyproject.toml for dewy-client
     c. the API version in dewy/config.py
     d. openapi.yaml and dewy-client, by running poe extract-openapi and poe update-client
  3. Once that PR is merged, edit the draft release: make sure the version and tag match what you selected in step 1 (and used in the PR), check "Set as a pre-release" (the release automation will update this), and publish the release.
  4. The release automation should kick in and work through the release steps. It will need approval for the pypi deployment environment to publish the dewy and dewy-client packages.

(back to top)

License

Distributed under the Apache 2 License. See LICENSE.txt for more information.

(back to top)
