
llama-index packs tables integration


Tables Packs

Chain-of-table Pack

This LlamaPack implements the Chain-of-Table paper by Wang et al.

Chain-of-Table proposes the following: given a user query over tabular data, plan out a sequence of tabular operations that transform the table step by step until it contains the information needed to answer the query. The updated table is explicitly used/modified throughout the intermediate chain (unlike chain-of-thought/ReAct, which pass along generic free-form thoughts).

There is a fixed set of tabular operations that are defined in the paper:

  • f_add_column
  • f_select_row
  • f_select_column
  • f_group_by
  • f_sort_by
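
To make these operations concrete, here is a rough pandas analogue of a short chain (an illustration only; in the pack, an LLM plans and drives these steps rather than hand-written code, and the data below is purely illustrative):

```python
import pandas as pd

# Toy table of Academy Award winners (illustrative data).
df = pd.DataFrame(
    {
        "year": [1972, 1972, 1973],
        "category": ["Best Director", "Best Picture", "Best Director"],
        "winner": ["William Friedkin", "The French Connection", "Bob Fosse"],
    }
)

# f_select_row: keep only the rows relevant to the query.
step1 = df[df["year"] == 1972]

# f_select_column: narrow the table to the columns needed.
step2 = step1[["category", "winner"]]

# f_sort_by: order the remaining rows.
step3 = step2.sort_values("category")

print(step3)
```

Each step produces a smaller, more focused table, which is exactly the intermediate state Chain-of-Table carries through the chain.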

We implemented the paper based on the prompts described in the paper, and adapted it to get it working. That said, this is marked as beta, so there may still be kinks to work through. Do you have suggestions / contributions on how to improve the robustness? Let us know!

A full notebook guide can be found here.

CLI Usage

You can download llamapacks directly using llamaindex-cli, which comes installed with the llama-index Python package:

llamaindex-cli download-llamapack ChainOfTablePack --download-dir ./chain_of_table_pack

You can then inspect the files at ./chain_of_table_pack and use them as a template for your own project!

Code Usage

We will show you how to import the agent from these files!

from llama_index.core.llama_pack import download_llama_pack

# download and install dependencies
ChainOfTablePack = download_llama_pack(
    "ChainOfTablePack", "./chain_of_table_pack"
)

From here, you can use the pack. You can import the relevant modules from the download folder (in the example below we assume it's a relative import or the directory has been added to your system path).

from chain_of_table_pack.base import ChainOfTableQueryEngine, serialize_table

query_engine = ChainOfTableQueryEngine(df, llm=llm, verbose=True)
response = query_engine.query(
    "Who won best Director in the 1972 Academy Awards?"
)
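
The snippet above assumes `df` (a pandas DataFrame) and `llm` are already defined. A minimal, hypothetical setup might look like this (the data and model name below are illustrative assumptions, not part of the pack):

```python
import pandas as pd

# Build (or load) the table you want to query as a pandas DataFrame.
# The data below is purely illustrative.
df = pd.DataFrame(
    {
        "year": [1972],
        "category": ["Best Director"],
        "winner": ["William Friedkin"],
    }
)

# Any LlamaIndex-compatible LLM can be passed as `llm`; for example,
# assuming the llama-index-llms-openai integration is installed:
# from llama_index.llms.openai import OpenAI
# llm = OpenAI(model="gpt-4")
```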

You can also use/initialize the pack directly.

from chain_of_table_pack.base import ChainOfTablePack

pack = ChainOfTablePack(df, llm=llm, verbose=True)

The run() function is a light wrapper around query_engine.query().

response = pack.run("Who won best Director in the 1972 Academy Awards?")

Mix-Self-Consistency Pack

This LlamaPack implements the mix self-consistency method proposed in "Rethinking Tabular Data Understanding with Large Language Models" paper by Liu et al.

LLMs can reason over tabular data in 2 main ways:

  1. textual reasoning via direct prompting
  2. symbolic reasoning via program synthesis (e.g. Python, SQL, etc.)

The key insight of the paper is that different reasoning pathways work well on different tasks. By aggregating results from both pathways with a self-consistency mechanism (i.e. majority voting), the combined approach achieves state-of-the-art performance.
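
As a toy illustration of the voting step (not the pack's exact implementation), answers sampled from both reasoning paths can be aggregated like this:

```python
from collections import Counter

def majority_vote(answers: list[str]) -> str:
    """Return the most common answer across all reasoning paths."""
    # Normalize so that trivially different spellings count as one answer.
    normalized = [a.strip().lower() for a in answers]
    winner, _count = Counter(normalized).most_common(1)[0]
    return winner

# Hypothetical answers sampled from the textual and symbolic paths.
textual_answers = ["William Friedkin", "william friedkin", "Bob Fosse"]
symbolic_answers = ["William Friedkin", "William Friedkin"]

print(majority_vote(textual_answers + symbolic_answers))
```

Because each path can be sampled multiple times, the vote smooths over individual bad generations from either pathway.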

We implemented the paper based on the prompts described in the paper, and adapted it to get it working. That said, this is marked as beta, so there may still be kinks to work through. Do you have suggestions / contributions on how to improve the robustness? Let us know!

A full notebook guide can be found here.

CLI Usage

You can download llamapacks directly using llamaindex-cli, which comes installed with the llama-index Python package:

llamaindex-cli download-llamapack MixSelfConsistencyPack --download-dir ./mix_self_consistency_pack

You can then inspect the files at ./mix_self_consistency_pack and use them as a template for your own project!

Code Usage

We will show you how to import the module from these files!

from llama_index.core.llama_pack import download_llama_pack

# download and install dependencies
MixSelfConsistencyPack = download_llama_pack(
    "MixSelfConsistencyPack", "./mix_self_consistency_pack"
)

From here, you can use the pack. You can import the relevant modules from the download folder (in the example below we assume it's a relative import or the directory has been added to your system path).

from mix_self_consistency_pack.base import MixSelfConsistencyQueryEngine

query_engine = MixSelfConsistencyQueryEngine(df=df, llm=llm, verbose=True)
response = query_engine.query(
    "Who won best Director in the 1972 Academy Awards?"
)

You can also use/initialize the pack directly.

from mix_self_consistency_pack.base import MixSelfConsistencyPack

pack = MixSelfConsistencyPack(df=df, llm=llm, verbose=True)

The run() function is a light wrapper around query_engine.query().

response = pack.run("Who won best Director in the 1972 Academy Awards?")
