
llama-index packs tables integration


Tables Packs

Chain-of-table Pack

This LlamaPack implements the Chain-of-Table paper by Wang et al.

Chain-of-Table proposes the following: given a user query over tabular data, plan out a sequence of tabular operations over the table to retrieve the right information in order to satisfy the user query. The updated table is explicitly used/modified throughout the intermediate chain (unlike chain-of-thought/ReAct which uses generic thoughts).

There is a fixed set of tabular operations that are defined in the paper:

  • f_add_column
  • f_select_row
  • f_select_column
  • f_group_by
  • f_sort_by

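To make these operations concrete, here is a rough pandas sketch of what each one does to a table. The DataFrame contents and column names are purely illustrative and not part of the pack's API; in the actual pack, the LLM plans which operations to apply and with what arguments.

```python
import pandas as pd

# Illustrative table (not real Academy Awards data)
df = pd.DataFrame({
    "year": [1972, 1972, 1973],
    "category": ["Best Director", "Best Picture", "Best Director"],
    "winner": ["William Friedkin", "The French Connection", "Bob Fosse"],
})

# f_add_column: derive a new column from existing ones
df = df.assign(decade=(df["year"] // 10) * 10)

# f_select_row: keep only the rows relevant to the query
df = df[df["category"] == "Best Director"]

# f_select_column: project onto the relevant columns
df = df[["year", "winner"]]

# f_sort_by: order the rows
df = df.sort_values("year")

# f_group_by: aggregate (here, count winners per year)
counts = df.groupby("year")["winner"].count()
print(counts.loc[1972])  # 1
```

Each intermediate table is passed back to the LLM, which decides the next operation or stops and answers.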
We implemented this based on the prompts described in the paper and adapted it to get it working. That said, this pack is marked as beta, so there may still be kinks to work through. Do you have suggestions or contributions on how to improve its robustness? Let us know!

A full notebook guide can be found here.

CLI Usage

You can download LlamaPacks directly using llamaindex-cli, which comes installed with the llama-index Python package:

llamaindex-cli download-llamapack ChainOfTablePack --download-dir ./chain_of_table_pack

You can then inspect the files at ./chain_of_table_pack and use them as a template for your own project!

Code Usage

We will show you how to import the agent from these files!

from llama_index.core.llama_pack import download_llama_pack

# download and install dependencies
ChainOfTablePack = download_llama_pack(
    "ChainOfTablePack", "./chain_of_table_pack"
)

From here, you can use the pack. You can import the relevant modules from the download folder (in the example below we assume it's a relative import or the directory has been added to your system path).

from chain_of_table_pack.base import ChainOfTableQueryEngine, serialize_table

query_engine = ChainOfTableQueryEngine(df, llm=llm, verbose=True)
response = query_engine.query(
    "Who won best Director in the 1972 Academy Awards?"
)
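The snippet above assumes you already have a pandas DataFrame df and a LlamaIndex LLM llm. A minimal, illustrative setup (the table contents here are made up for the example query) might look like:

```python
import pandas as pd

# Illustrative table matching the example query
df = pd.DataFrame(
    {
        "year": [1972, 1973],
        "category": ["Best Director", "Best Director"],
        "winner": ["William Friedkin", "Bob Fosse"],
    }
)

# `llm` can be any LlamaIndex LLM, for example (requires an API key):
# from llama_index.llms.openai import OpenAI
# llm = OpenAI(model="gpt-4")
```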

You can also use/initialize the pack directly.

from chain_of_table_pack.base import ChainOfTablePack

pack = ChainOfTablePack(df, llm=llm, verbose=True)

The run() function is a light wrapper around query_engine.query().

response = pack.run("Who won best Director in the 1972 Academy Awards?")

Mix-Self-Consistency Pack

This LlamaPack implements the mix self-consistency method proposed in the paper "Rethinking Tabular Data Understanding with Large Language Models" by Liu et al.

LLMs can reason over tabular data in two main ways:

  1. textual reasoning via direct prompting
  2. symbolic reasoning via program synthesis (e.g., Python or SQL)

The key insight of the paper is that different reasoning pathways work well on different tasks. By aggregating results from both pathways with a self-consistency mechanism (i.e., majority voting), the method achieves state-of-the-art performance.
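The aggregation step itself is simple. Here is a minimal sketch of majority voting over answers sampled from the two pathways; the answer strings are illustrative, and this sketch skips any answer normalization the real implementation may perform before voting.

```python
from collections import Counter

# Answers sampled from the two reasoning pathways (illustrative strings)
textual_answers = ["Bob Fosse", "William Friedkin", "Bob Fosse"]
symbolic_answers = ["Bob Fosse", "Bob Fosse"]

# Self-consistency: pool all samples and take the majority vote
votes = Counter(textual_answers + symbolic_answers)
answer, count = votes.most_common(1)[0]
print(answer)  # Bob Fosse (4 of 5 votes)
```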

We implemented this based on the prompts described in the paper and adapted it to get it working. That said, this pack is marked as beta, so there may still be kinks to work through. Do you have suggestions or contributions on how to improve its robustness? Let us know!

A full notebook guide can be found here.

CLI Usage

You can download LlamaPacks directly using llamaindex-cli, which comes installed with the llama-index Python package:

llamaindex-cli download-llamapack MixSelfConsistencyPack --download-dir ./mix_self_consistency_pack

You can then inspect the files at ./mix_self_consistency_pack and use them as a template for your own project!

Code Usage

We will show you how to import the module from these files!

from llama_index.core.llama_pack import download_llama_pack

# download and install dependencies
MixSelfConsistencyPack = download_llama_pack(
    "MixSelfConsistencyPack", "./mix_self_consistency_pack"
)

From here, you can use the pack. You can import the relevant modules from the download folder (in the example below we assume it's a relative import or the directory has been added to your system path).

from mix_self_consistency_pack.base import MixSelfConsistencyQueryEngine

query_engine = MixSelfConsistencyQueryEngine(df=df, llm=llm, verbose=True)
response = query_engine.query(
    "Who won best Director in the 1972 Academy Awards?"
)

You can also use/initialize the pack directly.

from mix_self_consistency_pack.base import MixSelfConsistencyPack

pack = MixSelfConsistencyPack(df=df, llm=llm, verbose=True)

The run() function is a light wrapper around query_engine.query().

response = pack.run("Who won best Director in the 1972 Academy Awards?")
