
LLM Multiple Choice

A Python library for having an LLM fill out a multiple-choice questionnaire about the current state of a chat.

Features

  • Composable with any LLM provider: this library generates LLM prompts and validates responses, but leaves the actual LLM calls to you.
  • Flexible questionnaire structure.
  • Simple API for using the questionnaire results in code.

Installation

You can install the library using pip:

pip install llm-multiple-choice

If you're using Poetry:

poetry add llm-multiple-choice

Usage

This library helps you define a multiple-choice questionnaire, generate a prompt for an LLM to fill it out, and validate the LLM's response.

Creating a Questionnaire

from llm_multiple_choice import ChoiceManager, DisplayFormat

# Create a questionnaire
manager = ChoiceManager()

# Add a section with choices
section = manager.add_section("Assess the sentiment of the message.")
positive = section.add_choice("The message expresses positive sentiment.")
neutral = section.add_choice("The message is neutral in sentiment.")
negative = section.add_choice("The message expresses negative sentiment.")

# Get the prompt to send to your LLM
prompt = manager.prompt_for_choices(DisplayFormat.MARKDOWN)
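
The library leaves the LLM call to you. As a rough sketch (assuming the OpenAI Python client, a gpt-4o-mini model, and a chat_message variable holding the message to assess; none of these are part of this library), you might send the prompt like this:

# Sketch only: the OpenAI client, model name, and chat_message are illustrative.
from openai import OpenAI

client = OpenAI()
completion = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are assessing the chat message below."},
        {"role": "user", "content": f"Message: {chat_message}\n\n{prompt}"},
    ],
)
llm_response = completion.choices[0].message.content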

Processing LLM Responses

The library enforces these rules for LLM responses:

  • Must contain only numbers corresponding to valid choices
  • Numbers must be separated by commas
  • Each number can only appear once
  • Cannot be empty
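
For example, with the three-choice questionnaire above (and assuming the generated prompt numbers the choices 1 through 3), these responses illustrate the rules:

# Illustrative sketch; choice numbering follows the generated prompt.
choices = manager.validate_choices_response("2")  # accepted: selects the neutral choice
# Each of these would raise InvalidChoicesResponseError:
#   ""     (empty response)
#   "1,1"  (a number may appear only once)
#   "7"    (not a valid choice number)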

Process the response:

from llm_multiple_choice import InvalidChoicesResponseError

try:
    choices = manager.validate_choices_response(llm_response)
    # Check which choices were selected
    if choices.has(positive):
        print("The positive-sentiment choice was selected")
except InvalidChoicesResponseError as e:
    print(f"Invalid response: {e}")

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

Setting Up for Development

To set up the project for development:

  1. Clone the repository:

    git clone https://github.com/deansher/llm-multiple-choice.git
    
  2. Navigate to the project directory:

    cd llm-multiple-choice
    
  3. Install dependencies using Poetry:

    poetry install
    

    This will install all the required packages in a virtual environment.

You can either activate the virtual environment in a shell by running poetry shell or run commands directly using poetry run <command>.

Editing in VSCode

To ensure VSCode uses the correct Python interpreter from the Poetry environment:

  1. Open the Command Palette (Ctrl+Shift+P or Cmd+Shift+P on Mac).
  2. Select Python: Select Interpreter.
  3. Choose the interpreter that corresponds to the project's virtual environment. It should be listed with the path to .venv.

If the virtual environment is not listed, you may need to refresh the interpreters or specify the path manually.

Running Tests

poetry run pytest

Adding Dependencies

To add a new dependency to the project:

  • For regular dependencies:

    poetry add <package_name>
    
  • For development dependencies (e.g., testing tools):

    poetry add --group dev <package_name>
    

This updates the pyproject.toml and poetry.lock files accordingly.

Release Process

This project uses GitHub Actions for automated testing and publishing to PyPI.

Making a Release

  1. Update version in pyproject.toml
  2. Create and push a new tag:
    git tag v0.1.0
    git push origin v0.1.0
    
  3. GitHub Actions will automatically:
    • Run all tests and type checking
    • Build the package
    • Publish to PyPI if all checks pass

Manual Publishing

If needed, you can publish manually using the build script:

# Publish to TestPyPI
./scripts/build_and_publish.sh

# Publish to production PyPI
./scripts/build_and_publish.sh --production

License

This project is licensed under the MIT License; see the LICENSE file for details.
