
Interface between LLMs and your data

Project description

🗂️ LlamaIndex 🦙 (GPT Index)

⚠️ NOTE: We are rebranding GPT Index as LlamaIndex! We will carry out this transition gradually.

2/25/2023: By default, our docs/notebooks/instructions now reference "LlamaIndex" instead of "GPT Index".

2/19/2023: By default, our docs/notebooks/instructions now use the llama-index package. However, the gpt-index package still exists as a duplicate!

2/16/2023: We have a duplicate llama-index pip package. Simply replace all imports of gpt_index with llama_index if you choose to pip install llama-index.

LlamaIndex (GPT Index) is a project that provides a central interface to connect your LLMs with external data.

PyPI: https://pypi.org/project/llama-index/ (duplicate package: https://pypi.org/project/gpt-index/).

Documentation: https://gpt-index.readthedocs.io/en/latest/.

Twitter: https://twitter.com/gpt_index.

Discord: https://discord.gg/dGcwcsnxhU.

LlamaHub (community library of data loaders): https://llamahub.ai

🚀 Overview

NOTE: This README is not updated as frequently as the documentation. Please check out the documentation above for the latest updates!

Context

  • LLMs are a phenomenal piece of technology for knowledge generation and reasoning. They are pre-trained on large amounts of publicly available data.
  • How do we best augment LLMs with our own private data?
  • One paradigm that has emerged is in-context learning (the other is finetuning), where we insert context into the input prompt. That way, we take advantage of the LLM's reasoning capabilities to generate a response.
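
To make the in-context learning paradigm concrete, here is a minimal, hand-rolled sketch of what LlamaIndex automates. The toy corpus, the keyword lookup, and the prompt wording are purely illustrative and not part of any library:

# A hand-rolled version of in-context learning: retrieve some relevant
# text and paste it into the prompt so the LLM can reason over it.
# CORPUS and the keyword-match retrieve() are illustrative stand-ins for
# whatever lookup you already have (a database, an embedding search, ...).

CORPUS = [
    "Refunds are issued within 14 days of a return request.",
    "Shipping is free for orders over $50.",
]

def retrieve(question: str) -> list[str]:
    # Naive keyword match; LlamaIndex replaces this with real indices.
    words = question.lower().split()
    return [doc for doc in CORPUS if any(w in doc.lower() for w in words)]

def build_prompt(question: str, context_chunks: list[str]) -> str:
    context = "\n\n".join(context_chunks)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

question = "How do refunds work?"
print(build_prompt(question, retrieve(question)))
# The resulting prompt string is what gets sent to the LLM; LlamaIndex
# automates the retrieval, chunking, and prompt assembly sketched here.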

To augment LLMs with this data in a performant, efficient, and cheap manner, we need to solve two components:

  • Data Ingestion
  • Data Indexing

Proposed Solution

That's where LlamaIndex comes in. LlamaIndex is a simple, flexible interface between your external data and LLMs. It provides the following tools in an easy-to-use fashion:

  • Offers data connectors to your existing data sources and data formats (APIs, PDFs, docs, SQL, etc.); a loader sketch follows this list.
  • Provides indices over your unstructured and structured data for use with LLMs. These indices help to abstract away common boilerplate and pain points for in-context learning:
    • Storing context in an easy-to-access format for prompt insertion.
    • Dealing with prompt limitations (e.g. 4096 tokens for Davinci) when context is too big.
    • Dealing with text splitting.
  • Provides users an interface to query the index (feed in an input prompt) and obtain a knowledge-augmented output.
  • Offers a comprehensive toolset for trading off cost and performance.
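
As a concrete example of the data-connector point above, LlamaHub loaders can be pulled in at runtime via download_loader. This is a sketch rather than the canonical quickstart: the WikipediaReader loader and its pages argument are assumed from LlamaHub listings of this era, so verify the names against https://llamahub.ai for your version:

from llama_index import GPTSimpleVectorIndex, download_loader

# Fetch a community data connector from LlamaHub at runtime
# (loader name and keyword arguments assumed from llamahub.ai listings).
WikipediaReader = download_loader("WikipediaReader")

# Ingest a few Wikipedia pages into Document objects...
documents = WikipediaReader().load_data(pages=["Large language model"])

# ...then index and query them exactly like local files.
index = GPTSimpleVectorIndex.from_documents(documents)
print(index.query("What is a large language model?"))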

💡 Contributing

Interested in contributing? See our Contribution Guide for more details.

📄 Documentation

Full documentation can be found here: https://gpt-index.readthedocs.io/en/latest/.

Please check it out for the most up-to-date tutorials, how-to guides, references, and other resources!

💻 Example Usage

pip install llama-index

Examples are in the examples folder. Indices are in the indices folder (see the documentation for the full list of index structures).

To build a simple vector store index:

import os
os.environ["OPENAI_API_KEY"] = 'YOUR_OPENAI_API_KEY'

from llama_index import GPTSimpleVectorIndex, SimpleDirectoryReader

# Load every file in the ./data directory into Document objects
documents = SimpleDirectoryReader('data').load_data()
# Chunk the documents, embed them, and build the vector index
index = GPTSimpleVectorIndex.from_documents(documents)

To save to and load from disk:

# save to disk
index.save_to_disk('index.json')
# load from disk
index = GPTSimpleVectorIndex.load_from_disk('index.json')

To query:

index.query("<question_text>?")
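
query also accepts keyword arguments for tuning retrieval and response synthesis. The sketch below assumes the 0.5.x-era parameter names (similarity_top_k, response_mode) and the source_nodes attribute on the returned response; check the documentation for your installed version:

# Retrieve more chunks per query and synthesize a more compact answer.
# similarity_top_k and response_mode are assumed 0.5.x-era keyword arguments.
response = index.query(
    "<question_text>?",
    similarity_top_k=3,
    response_mode="compact",
)
print(response)

# The response also carries the source chunks the answer was built from
# (attribute names assumed from the 0.5.x Response object).
for source in response.source_nodes:
    print(source.source_text[:200])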

🔧 Dependencies

The main third-party package requirements are tiktoken, openai, and langchain.

All requirements should be contained within the setup.py file. To run the package locally without building the wheel, simply run pip install -r requirements.txt.

📖 Citation

Reference to cite if you use LlamaIndex in a paper:

@software{Liu_LlamaIndex_2022,
  author = {Liu, Jerry},
  doi = {10.5281/zenodo.1234},
  month = {11},
  title = {{LlamaIndex}},
  url = {https://github.com/jerryjliu/gpt_index},
  year = {2022}
}


Download files

Source Distribution

gpt_index-0.5.3.tar.gz (194.3 kB)


Built Distribution

gpt_index-0.5.3-py3-none-any.whl (294.0 kB)


File details

Details for the file gpt_index-0.5.3.tar.gz.

File metadata

  • Download URL: gpt_index-0.5.3.tar.gz
  • Upload date:
  • Size: 194.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.9.16

File hashes

Hashes for gpt_index-0.5.3.tar.gz

  • SHA256: f20fab8cb1a63f9b9def70741ef5c9aff1cb56446e9fb4a9810a6ecd9e71b00f
  • MD5: c2fa715f119d8a9479be701d8bff4b1d
  • BLAKE2b-256: 3ab76796120c03f17e671ccbbb5dea51f9d7e3c878becc0418b8ac5c99aff814


File details

Details for the file gpt_index-0.5.3-py3-none-any.whl.

File metadata

  • Download URL: gpt_index-0.5.3-py3-none-any.whl
  • Upload date:
  • Size: 294.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.9.16

File hashes

Hashes for gpt_index-0.5.3-py3-none-any.whl

  • SHA256: 81dd19d397323debef45e83d6db75f793b7f6a2522e66b0029e9a1f0ed51ec5f
  • MD5: 8bb99940e7337bffe43877015faa3004
  • BLAKE2b-256: e819cf6a7523c297a578635fd7aae8576ec44470ad7b67e004f56a515d4a3e54

