
Project description

LLMT

Convenient LLM chat wrapper for data pipelines, CI/CD, or personal workspaces.

Supports local function calling and chat history retention, and can run anywhere. Chat using a terminal, input/output files, or directly through the LLMT API.

Usage

from llmt import LLMT
from myfunctions import add_decimal_values

# Custom functions might, for example:
# 1. fetch table metadata
# 2. run a query
# 3. analyze the results

tools = [
    {
        "type": "function",
        "function": {
            "name": "add_decimal_values",
            "description": "Add two decimal values and return the result.\n",
            "parameters": {
                "type": "object",
                "properties": {
                    "value1": {
                        "type": "integer",
                        "description": "The first decimal value to add. For example, 5",
                    },
                    "value2": {
                        "type": "integer",
                        "description": "The second decimal value to add. For example, 10",
                    },
                },
                "required": ["value1", "value2"],
            },
        },
    }
]

llmt = LLMT()
llmt.init_assistant(
    "dataengineer",
    api_key="...",
    model="gpt-3.5-turbo",
    assistant_description=(
        " ".join(
            [
                "You are a data engineer, and an expert with python,",
                "sqlalchemy, pandas, and snowflake. Answer questions",
                "briefly in a sentence or less.",
            ]
        )
    ),
    tools=tools,
)
llmt.init_chat("single_chat")
response = llmt.run(
    "What's the result of 22 plus 5 in decimal added to the hexadecimal number A?",
    functions=[add_decimal_values],
)

In a workspace

  • Install Docker and the make command.
  • Optionally create custom functions in the udf/ directory and import them in cli.py.
  • Update or create a new configuration file in configs/.
  • Make sure the configuration file describes your custom functions in assistants.tools.
  • Run make run.
  • Use files/input.md to send messages.
  • Use files/output.md to receive messages.
  • Press CTRL+C to exit the container and clean up orphans.

Configuration file

If both input_file and output_file are omitted, the default terminal is used. Using input and output files to converse with an LLM is easier than typing in the terminal. A sketch of a complete configuration file follows the field list below.

  • input_file: specify a file for user input
  • output_file: specify a file for LLM response
  • assistants:
    • type: Assistant type, currently only OpenAI.
    • assistant_name: Assistant name.
    • assistant_description: Assistant description which OpenAI will use for assistant context.
    • api_key: OpenAI API key.
    • model: OpenAI model.
    • tools: Function definitions. For now, in addition to writing the functions themselves, you must also describe them in a format the OpenAI API understands. Each function takes a single object argument, which must be unpacked inside the function to extract the actual parameters. Hopefully this changes in the future.
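
Putting the fields together, a configuration file in configs/ might look roughly like the sketch below. This is an illustration only; the exact schema, file format, and accepted values (such as the assistant type string) come from the LLMT source, so check a shipped example config before relying on it.

input_file: files/input.md
output_file: files/output.md
assistants:
  - type: OpenAI
    assistant_name: dataengineer
    assistant_description: >
      You are a data engineer, and an expert with python, sqlalchemy,
      pandas, and snowflake. Answer questions briefly in a sentence or less.
    api_key: ...
    model: gpt-3.5-turbo
    tools:
      - type: function
        function:
          name: add_decimal_values
          description: Add two decimal values and return the result.
          parameters:
            type: object
            properties:
              value1: {type: integer, description: The first decimal value to add}
              value2: {type: integer, description: The second decimal value to add}
            required: [value1, value2]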

The image used for running this code has some common tools installed that I use daily in my custom functions:

  • awscli
  • cloudquery
  • numpy
  • pandas
  • psycopg2-binary
  • SQLAlchemy

Build and use your own image with additional tools for whatever your functions need, as sketched below.
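
As a starting point (the base image and package list here are illustrative, not the official image), a custom Dockerfile could look like:

# Dockerfile (hypothetical) - a custom image with extra tools
# for your own functions.
FROM python:3.9-slim

# llmt itself plus whatever your custom functions import.
RUN pip install llmt awscli pandas SQLAlchemy psycopg2-binary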

Need help?

I help organizations build data pipelines with AI integrations. If your organization needs help building or exploring solutions, feel free to reach me at artem at outermeasure.com. The general workflow is:

  1. Fine-tune a curated model with proprietary data to perform tasks specific to your pipeline.
  2. Deploy the model in your cloud environment.
  3. Connect your pipeline to the deployment via an API.
  4. Iterate and improve the model.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

llmt-0.0.2.tar.gz (14.7 kB)


Built Distribution

llmt-0.0.2-py3-none-any.whl (14.4 kB)


File details

Details for the file llmt-0.0.2.tar.gz.

File metadata

  • Download URL: llmt-0.0.2.tar.gz
  • Size: 14.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.9.19

File hashes

Hashes for llmt-0.0.2.tar.gz

  • SHA256: 1910f4bb7db288b8e20622a614e050a472dac9a2e4054a9913eb38b594de8e9b
  • MD5: 773fabeb7f144a74271daa7fa3c15ef5
  • BLAKE2b-256: 49ed67647677e15dab9927105e9b9a81c749255b3c1fad52ac165d63989b03ae


File details

Details for the file llmt-0.0.2-py3-none-any.whl.

File metadata

  • Download URL: llmt-0.0.2-py3-none-any.whl
  • Size: 14.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.9.19

File hashes

Hashes for llmt-0.0.2-py3-none-any.whl

  • SHA256: e4a1d4faa22bd172a1aab8dfe8e4d26be5dc8e3094af080a73113aa2e24acbc7
  • MD5: 3296beb3b6f35f52f4fbf675a0b35479
  • BLAKE2b-256: 5d5ecc3378627ec015889496eb39fabbfa2b1cd124f6c7cbaf1bb50d1f294c07

