
A quick way to access LLM output from the CLI and run code agents

Project description


LLM Wrapper CLI

LLM Wrapper CLI is a powerful tool designed to simplify interactions with language model APIs such as Hugging Face and OpenAI. Whether you need to execute code snippets, analyze documents, or generate content, LLM Wrapper CLI provides a seamless command-line interface. Key features include:

  • 📜 Custom System Prompts: Quickly configure language model outputs with custom prompts.
  • 🌐 File and URL Processing: Process files and URLs in various formats, including YouTube videos and Excel files.
  • 🏠 Self-Hosting Support: Easily connect to self-hosted language models using Ollama or similar services.
  • ⚙️ Flexible Configuration: Configure settings either through command-line arguments, configuration files, or environment variables.

Quick Examples

Here are a few examples to give you a taste of what you can do with LLM Wrapper CLI:

# Convert date string to datetime object
$ llmc python convert "29 June, 1895" to datetime
from datetime import datetime
dt = datetime.strptime('29 June, 1895', '%d %B, %Y')

# Grep for Python files in a directory hierarchy
$ llmc bash grep only python files in folder hierarchy
grep -r --include "*.py" "pattern" folder/

# Explain the functionality of this package
$ llmc explain what this package does in 2 sentences -i $(find src -name "*.py")
This package provides a command-line interface for interacting with language models
like Hugging Face and OpenAI. It supports managing chat sessions,
loading custom prompts, and performing file operations via a code agent.

Installation

pip install llm-wrapper-cli
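
If you prefer to keep command-line tools in isolated environments, installing through pipx should also work (a standard option for any CLI package, not something specific to this project's docs):

pipx install llm-wrapper-cli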

Using the Hugging Face API

The fastest way to get started with llmc is with Hugging Face. Simply follow these instructions to create a token, then export it in your shell like so:

export HF_TOKEN=[...]

And you're ready to go.
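
A minimal first run might then look like this (the query and output below are illustrative, not taken from the project's docs):

$ llmc bash find files modified in the last 24 hours
find . -type f -mtime -1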

Using Ollama/OpenAI

If you're already using Ollama, one way to configure llmc to use it is by creating the following file:

$ cat ~/.llmc/conf.yml
provider: openai
openai_url: "http://localhost:11434/v1"
openai_model: llama3.2 # Put your favourite model here

You can find more ways to configure llmc in the Configuration section.
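
For reference, here is a sketch of what the same setup could look like through the other two channels, assuming llmc follows the common convention of mapping configuration keys to flags and environment variables; the exact names below are hypothetical, so check llmc --help and the Configuration section:

# Hypothetical flag spelling, mirroring the conf.yml keys above
$ llmc --provider openai --openai-url http://localhost:11434/v1 --openai-model llama3.2 bash list listening ports

# Hypothetical environment variable spelling
$ export LLMC_OPENAI_URL="http://localhost:11434/v1"
$ export LLMC_OPENAI_MODEL=llama3.2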

Features

Code agent

This project is built on top of Hugging Face's smolagents package, which allows LLMs to write their own code, execute it, and use the results to write further code. In the context of this project, the agent can:

  • Write code and tests, and debug them on its own.
  • Perform filesystem operations such as moving, renaming, and editing files.
  • Provide up-to-date information by querying the web instead of relying on its own memory.

Agent mode is activated with the --agent option of llmc; example runs can be found in the Agents section.
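
For instance, runs along these lines (the tasks are illustrative; only the --agent flag itself is documented here):

$ llmc --agent write a function that computes a moving average, then write and run tests for it
$ llmc --agent rename every .jpeg file under photos/ to use the .jpg extension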

Seamless system prompts

The first word of the query is checked against the available prompts. This makes it easy to shape the LLM output without the need to maintain multiple sessions:

$ cat ~/.llmc/prompts/bash.md
You are a helpful chatbot and an expert in shell commands. You answer shell
queries with only the commands, without markdown backticks or explanations,
unless specifically requested. For instance:

User: Give me a command that list the size of all files in a folder.
You: du -sh folder/*

$ llmc bash ssh into a docker container
docker exec -it container_id_or_name /bin/bash

This prompt and others are provided by default; you can find their definitions in the Prompts section.
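
Adding your own prompt follows the same convention: drop a markdown file into ~/.llmc/prompts/, and its basename becomes the trigger word. The sql prompt below, and the response shown, are made-up examples rather than one of the defaults:

$ cat > ~/.llmc/prompts/sql.md << 'EOF'
You are an expert in SQL. You answer queries with a single SQL statement,
without markdown backticks or explanations.
EOF
$ llmc sql count rows per day in an events table with a created_at column
SELECT created_at::date AS day, COUNT(*) FROM events GROUP BY day ORDER BY day;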

Providing files and URLs

You can provide files or URLs using the -i option. These inputs are converted to markdown thanks to Microsoft's markitdown package, which handles a truly impressive number of formats.
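
Since markitdown also ships a standalone CLI, you can preview the markdown that a given input converts to; this is a side note assuming you install markitdown yourself, independently of llmc:

$ pip install 'markitdown[all]'
$ markitdown report.xlsx > report.md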

Examples

Passing code files
$ llmc python histogram of a list >> hist.py
$ llmc python test histogram function -i hist.py
def test_histogram():
    assert histogram([1, 2, 2, 3, 3, 3, 4, 4, 4, 4]) == {1: 1, 2: 2, 3: 3, 4: 4}
    assert histogram(['a', 'b', 'a', 'c', 'b', 'c', 'c']) == {'a': 2, 'b': 2, 'c': 3}
    assert histogram([]) == {}
    assert histogram([1]) == {1: 1}
    assert histogram([1, 1, 1, 1]) == {1: 4}

test_histogram()

To pass all Python files in the src folder: llmc -i $(find src -name "*.py")
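
Since -i accepts several inputs at once, files and URLs can presumably be mixed in a single call; the example below is illustrative:

$ llmc compare my histogram with the reference implementation -i hist.py https://example.com/reference.py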

Summarizing YouTube videos
$ llmc summarize in a few words -i https://www.youtube.com/watch\?v\=BKorP55Aqvg
Short, humorous comedy sketch satirizing corporate meeting dynamics and an engineer's
frustration with ambiguous instructions.

Credits

This package was created with Cookiecutter and the CalOmnie/cookiecutter-pypackage project template.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

llm_wrapper_cli-1.0.2.tar.gz (14.4 kB)


Built Distribution

llm_wrapper_cli-1.0.2-py3-none-any.whl (11.8 kB)


File details

Details for the file llm_wrapper_cli-1.0.2.tar.gz.

File metadata

  • Download URL: llm_wrapper_cli-1.0.2.tar.gz
  • Upload date:
  • Size: 14.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.12.8

File hashes

Hashes for llm_wrapper_cli-1.0.2.tar.gz
  • SHA256: f23bde463548d5b8d6543f887f9fadb9d7c7b06dbd9385fdedc50d3501118a32
  • MD5: 6e0b8abb869f6a85419cdef0e5b152de
  • BLAKE2b-256: a89465cfc32a1779fe941f2dc4f0a4821fb5e06293f8a74903c2d57765d467da

See more details on using hashes here.
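
As a sketch of what these digests are for: pip's hash-checking mode can pin the release to its published digests (the sdist hash above plus the wheel hash listed further down). Note that in this mode every requirement in the file, dependencies included, needs a hash:

$ cat requirements.txt
llm-wrapper-cli==1.0.2 \
    --hash=sha256:f23bde463548d5b8d6543f887f9fadb9d7c7b06dbd9385fdedc50d3501118a32 \
    --hash=sha256:a3dea1dc780bc264c358f914c31b3246c51473754148c8a75785d9236d20a97e
$ pip install --require-hashes -r requirements.txt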

Provenance

The following attestation bundles were made for llm_wrapper_cli-1.0.2.tar.gz:

Publisher: publish_to_pypi.yml on CalOmnie/llm-wrapper-cli

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file llm_wrapper_cli-1.0.2-py3-none-any.whl.


File hashes

Hashes for llm_wrapper_cli-1.0.2-py3-none-any.whl
  • SHA256: a3dea1dc780bc264c358f914c31b3246c51473754148c8a75785d9236d20a97e
  • MD5: 08e88ead254897fc34fa2020eaf36bfa
  • BLAKE2b-256: 0e341e5d9a3fc21a59343668b85058a9dad17775c7c620949b26810764c61185

See more details on using hashes here.

Provenance

The following attestation bundles were made for llm_wrapper_cli-1.0.2-py3-none-any.whl:

Publisher: publish_to_pypi.yml on CalOmnie/llm-wrapper-cli

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
