
Interface between LLMs and your data

Project description

🗂️ LlamaIndex 🦙 (GPT Index)

⚠️ NOTE: We are rebranding GPT Index as LlamaIndex! We will carry out this transition gradually.

2/25/2023: By default, our docs/notebooks/instructions now reference "LlamaIndex" instead of "GPT Index".

2/19/2023: By default, our docs/notebooks/instructions now use the llama-index package. However, the gpt-index package still exists as a duplicate!

2/16/2023: We have a duplicate llama-index pip package. Simply replace all imports of gpt_index with llama_index if you choose to pip install llama-index.
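For existing code, the migration is only an import rename; a minimal sketch (the class names are just the ones used in the example further below):

# before: imports from the old gpt-index package
# from gpt_index import GPTSimpleVectorIndex, SimpleDirectoryReader

# after: the same classes from the llama-index package
from llama_index import GPTSimpleVectorIndex, SimpleDirectoryReader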

LlamaIndex (GPT Index) is a project that provides a central interface to connect your LLMs with external data.

PyPI: https://pypi.org/project/llama-index/ (duplicate package: https://pypi.org/project/gpt-index/).

Documentation: https://gpt-index.readthedocs.io/en/latest/.

Twitter: https://twitter.com/gpt_index.

Discord: https://discord.gg/dGcwcsnxhU.

LlamaHub (community library of data loaders): https://llamahub.ai

🚀 Overview

NOTE: This README is not updated as frequently as the documentation. Please check out the documentation above for the latest updates!

Context

  • LLMs are a phenomenal piece of technology for knowledge generation and reasoning. They are pre-trained on large amounts of publicly available data.
  • How do we best augment LLMs with our own private data?
  • One paradigm that has emerged is in-context learning (the other is finetuning), where we insert context into the input prompt. That way, we take advantage of the LLM's reasoning capabilities to generate a response (sketched below).
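As a rough sketch of what in-context learning means in practice (the template and strings below are illustrative only, not a LlamaIndex API):

# illustrative only: "in-context learning" = stuff retrieved context into the
# prompt ahead of the question, then let the LLM reason over it
context = "LlamaIndex provides a central interface between LLMs and external data."
question = "What does LlamaIndex do?"

prompt = (
    "Context information is below.\n"
    "---------------------\n"
    f"{context}\n"
    "---------------------\n"
    f"Given the context information, answer the question: {question}"
)
# `prompt` would then be sent to the LLM of your choice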

To perform LLM data augmentation in a performant, efficient, and cheap manner, we need to solve two components:

  • Data Ingestion
  • Data Indexing

Proposed Solution

That's where LlamaIndex comes in. LlamaIndex is a simple, flexible interface between your external data and LLMs. It provides the following tools in an easy-to-use fashion:

  • Offers data connectors to your existing data sources and data formats (APIs, PDFs, docs, SQL, etc.); see the loader sketch after this list.
  • Provides indices over your unstructured and structured data for use with LLMs. These indices help to abstract away common boilerplate and pain points for in-context learning:
    • Storing context in an easy-to-access format for prompt insertion.
    • Dealing with prompt limitations (e.g. 4096 tokens for Davinci) when context is too big.
    • Dealing with text splitting.
  • Provides users an interface to query the index (feed in an input prompt) and obtain a knowledge-augmented output.
  • Offers a comprehensive toolset for trading off cost and performance.
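To make the data-connector bullet concrete, here is a hedged sketch of wiring a community loader into an index; the download_loader helper and the WikipediaReader loader come from LlamaHub, and their availability in your installed version is an assumption:

from llama_index import GPTSimpleVectorIndex, download_loader

# pull a community loader from LlamaHub (assumes network access; the
# "WikipediaReader" loader is one example of many listed on https://llamahub.ai)
WikipediaReader = download_loader("WikipediaReader")
documents = WikipediaReader().load_data(pages=['Berlin'])

# index the loaded documents, then ask a knowledge-augmented question
index = GPTSimpleVectorIndex.from_documents(documents)
index.query("What is the population of Berlin?")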

💡 Contributing

Interested in contributing? See our Contribution Guide for more details.

📄 Documentation

Full documentation can be found here: https://gpt-index.readthedocs.io/en/latest/.

Please check it out for the most up-to-date tutorials, how-to guides, references, and other resources!

💻 Example Usage

pip install llama-index

Examples are in the examples folder. Index implementations are in the indices folder.

To build a simple vector store index:

import os
os.environ["OPENAI_API_KEY"] = 'YOUR_OPENAI_API_KEY'

from llama_index import GPTSimpleVectorIndex, SimpleDirectoryReader

# read every file in the ./data directory into Document objects
documents = SimpleDirectoryReader('data').load_data()
# build a vector store index over those documents
index = GPTSimpleVectorIndex.from_documents(documents)

To save to and load from disk:

# save to disk
index.save_to_disk('index.json')
# load from disk
index = GPTSimpleVectorIndex.load_from_disk('index.json')

To query:

index.query("<question_text>?")
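The query call also accepts keyword arguments that control retrieval; a sketch (the similarity_top_k parameter is an assumption based on the 0.5.x vector index API and may differ in other releases):

# retrieve the 3 most similar chunks before synthesizing an answer
# (similarity_top_k applies to vector store indices such as GPTSimpleVectorIndex)
response = index.query("<question_text>?", similarity_top_k=3)
print(response)  # the knowledge-augmented answer as text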

🔧 Dependencies

The main third-party package requirements are tiktoken, openai, and langchain.

All requirements should be contained within the setup.py file. To run the package locally without building the wheel, simply run pip install -r requirements.txt.

📖 Citation

Reference to cite if you use LlamaIndex in a paper:

@software{Liu_LlamaIndex_2022,
  author = {Liu, Jerry},
  doi = {10.5281/zenodo.1234},
  month = {11},
  title = {{LlamaIndex}},
  url = {https://github.com/jerryjliu/gpt_index},
  year = {2022}
}


Download files

Download the file for your platform.

Source Distribution

gpt_index-0.5.1.tar.gz (191.2 kB)

Built Distribution

gpt_index-0.5.1-py3-none-any.whl (289.4 kB)

File details

Details for the file gpt_index-0.5.1.tar.gz.

File metadata

  • Download URL: gpt_index-0.5.1.tar.gz
  • Upload date:
  • Size: 191.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.9.16

File hashes

Hashes for gpt_index-0.5.1.tar.gz

  • SHA256: ef900ceac43af107307cbf01c5c3d590f87ae1c136ea013ce43855979f4741e7
  • MD5: 26ea55672fb28dde7b6d4a3f476fb21f
  • BLAKE2b-256: e25709164aa43e5c50195c39cd398c45ccf3698badf1e908e5d3b26e4e60f820


File details

Details for the file gpt_index-0.5.1-py3-none-any.whl.

File metadata

  • Download URL: gpt_index-0.5.1-py3-none-any.whl
  • Upload date:
  • Size: 289.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.9.16

File hashes

Hashes for gpt_index-0.5.1-py3-none-any.whl

  • SHA256: 4162bb2caf5b597b77d2fe07327e756c58ed3b2bd653318c17bdcb796a366684
  • MD5: 4e57cf0111d6b83106b3abf27bd9e9d7
  • BLAKE2b-256: e2abaadeb3ec77ac3dd62c85e2b22944dba2f0b0e1cd716dcbb9eba93a4c28bf

