
Interface between LLMs and your data


🗂️ LlamaIndex 🦙 (GPT Index)

⚠️ NOTE: We are rebranding GPT Index as LlamaIndex! We will carry out this transition gradually.

2/25/2023: By default, our docs/notebooks/instructions now reference "LlamaIndex" instead of "GPT Index".

2/19/2023: By default, our docs/notebooks/instructions now use the llama-index package. However, the gpt-index package still exists as a duplicate!

2/16/2023: We have a duplicate llama-index pip package. Simply replace all imports of gpt_index with llama_index if you choose to pip install llama-index.

LlamaIndex (GPT Index) is a project that provides a central interface to connect your LLMs with external data.

PyPi:

Documentation: https://gpt-index.readthedocs.io/en/latest/.

Twitter: https://twitter.com/gpt_index.

Discord: https://discord.gg/dGcwcsnxhU.

LlamaHub (community library of data loaders): https://llamahub.ai

🚀 Overview

NOTE: This README is not updated as frequently as the documentation. Please check out the documentation above for the latest updates!

Context

  • LLMs are a phenomenal piece of technology for knowledge generation and reasoning. They are pre-trained on large amounts of publicly available data.
  • How do we best augment LLMs with our own private data?
  • One paradigm that has emerged is in-context learning (the other is finetuning), where we insert context into the input prompt. That way, we take advantage of the LLM's reasoning capabilities to generate a response.
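The in-context learning paradigm above can be sketched in a few lines. The helper below is purely illustrative (it is not part of the llama_index API), and the prompt template is an assumption, but it shows the core idea: retrieved context is inserted into the prompt so the LLM can reason over your private data.

```python
# Minimal sketch of in-context learning: insert retrieved context into
# the input prompt. `build_prompt` is a hypothetical helper, not a
# llama_index API; the template wording is illustrative only.
def build_prompt(context_chunks, question):
    context = "\n".join(context_chunks)
    return (
        "Context information is below.\n"
        "---------------------\n"
        f"{context}\n"
        "---------------------\n"
        f"Given the context information, answer the question: {question}"
    )

prompt = build_prompt(
    ["LlamaIndex connects LLMs with external data."],
    "What does LlamaIndex do?",
)
print(prompt)
```

The resulting string would be sent to the LLM as-is; the model's reasoning then operates over the inserted context rather than only its pre-training data.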

To perform this data augmentation in a performant, efficient, and cheap manner, we need two components:

  • Data Ingestion
  • Data Indexing
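To make the two components concrete, here is a toy sketch of ingestion and retrieval. It scores documents by bag-of-words overlap instead of real embeddings, and the `ingest`/`retrieve` names are hypothetical, not LlamaIndex APIs; it only illustrates the shape of the problem.

```python
# Toy sketch of the two components: ingest documents into an index,
# then retrieve the most relevant document at query time.
# Bag-of-words overlap stands in for real embeddings; purely illustrative.
def ingest(documents):
    # Data Ingestion: store each document with a simple term-set "index" entry.
    return [(doc, set(doc.lower().split())) for doc in documents]

def retrieve(index, question, top_k=1):
    # Data Indexing pays off here: score entries by term overlap with the query.
    q_terms = set(question.lower().split())
    scored = sorted(index, key=lambda item: len(q_terms & item[1]), reverse=True)
    return [doc for doc, _ in scored[:top_k]]

index = ingest([
    "LlamaIndex connects LLMs to data.",
    "Bananas are yellow.",
])
print(retrieve(index, "What connects LLMs to data?"))
```

A real implementation replaces the term sets with vector embeddings and the overlap score with a similarity metric, but the ingest-then-retrieve structure is the same.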

Proposed Solution

That's where LlamaIndex comes in. LlamaIndex is a simple, flexible interface between your external data and LLMs. It provides the following tools in an easy-to-use fashion:

  • Offers data connectors to your existing data sources and data formats (APIs, PDFs, docs, SQL, etc.)
  • Provides indices over your unstructured and structured data for use with LLMs. These indices help to abstract away common boilerplate and pain points for in-context learning:
    • Storing context in an easy-to-access format for prompt insertion.
    • Dealing with prompt limitations (e.g. 4096 tokens for Davinci) when context is too big.
    • Dealing with text splitting.
  • Provides users an interface to query the index (feed in an input prompt) and obtain a knowledge-augmented output.
  • Offers you a comprehensive toolset trading off cost and performance.
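The text-splitting pain point above can be sketched as follows. This is not the llama_index splitter: token counts are approximated by whitespace words (a real implementation would use a tokenizer such as tiktoken), and the chunk sizes are illustrative, but it shows how a long document is broken into overlapping chunks that each fit a prompt's token budget.

```python
# Rough sketch of text splitting: break a long document into overlapping
# chunks so each one fits within a model's context window (e.g. 4096
# tokens for Davinci). Words approximate tokens here; illustrative only.
def split_text(text, chunk_size=256, overlap=32):
    words = text.split()
    chunks = []
    start = 0
    while start < len(words):
        chunks.append(" ".join(words[start:start + chunk_size]))
        start += chunk_size - overlap  # step forward, keeping some overlap
    return chunks

chunks = split_text("some long document " * 500, chunk_size=256, overlap=32)
print(len(chunks))
```

The overlap preserves continuity across chunk boundaries so that a sentence split mid-thought still appears whole in at least one chunk.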

💡 Contributing

Interested in contributing? See our Contribution Guide for more details.

📄 Documentation

Full documentation can be found here: https://gpt-index.readthedocs.io/en/latest/.

Please check it out for the most up-to-date tutorials, how-to guides, references, and other resources!

💻 Example Usage

pip install llama-index

Examples are in the examples folder. Index implementations are in the indices folder.

To build a simple vector store index:

import os
os.environ["OPENAI_API_KEY"] = 'YOUR_OPENAI_API_KEY'

from llama_index import GPTSimpleVectorIndex, SimpleDirectoryReader
documents = SimpleDirectoryReader('data').load_data()
index = GPTSimpleVectorIndex(documents)

To save to and load from disk:

# save to disk
index.save_to_disk('index.json')
# load from disk
index = GPTSimpleVectorIndex.load_from_disk('index.json')

To query:

index.query("<question_text>?")

🔧 Dependencies

The main third-party package requirements are tiktoken, openai, and langchain.

All requirements should be contained within the setup.py file. To run the package locally without building the wheel, simply run pip install -r requirements.txt.

📖 Citation

Reference to cite if you use LlamaIndex in a paper:

@software{Liu_LlamaIndex_2022,
author = {Liu, Jerry},
doi = {10.5281/zenodo.1234},
month = {11},
title = {{LlamaIndex}},
url = {https://github.com/jerryjliu/gpt_index},
year = {2022}
}


Download files


Source Distribution

gpt_index-0.4.40.tar.gz (183.5 kB)

Uploaded Source

Built Distribution

gpt_index-0.4.40-py3-none-any.whl (275.9 kB)

Uploaded Python 3

File details

Details for the file gpt_index-0.4.40.tar.gz.

File metadata

  • Download URL: gpt_index-0.4.40.tar.gz
  • Upload date:
  • Size: 183.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.9.16

File hashes

Hashes for gpt_index-0.4.40.tar.gz:

  • SHA256: 7d19e32f0bd08861d146866b26e953c45a7b3bc7c006b0b55225b3c8247586a6
  • MD5: 4e4a9f4026cd2af38c0cb252e5b4b975
  • BLAKE2b-256: 943d8ceeebc63420bfe08abf498fbf04e3ec4b1a61f8cc6d58df80039b237b4f


File details

Details for the file gpt_index-0.4.40-py3-none-any.whl.

File metadata

  • Download URL: gpt_index-0.4.40-py3-none-any.whl
  • Upload date:
  • Size: 275.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.9.16

File hashes

Hashes for gpt_index-0.4.40-py3-none-any.whl:

  • SHA256: d5b743aabece2d0e8b0590749cd8379b20297e389ca3d7f8b871c2828b06bd2c
  • MD5: 76e3854ec0d0a03268c0a67c89e3095f
  • BLAKE2b-256: 626f4c081423da8d8b894122b65049d478ea578d465480410f2ff9f210fb6769

