A Python package that uses LangChain and LiteLLM to call large language model APIs easily

[ChainLite logo]

ChainLite

ChainLite combines LangChain and LiteLLM to provide an easy-to-use and customizable interface for large language model applications.

* The logo was generated using DALL·E 3.

Installation

ChainLite has been tested with Python 3.10. To install, do the following:

  1. Install ChainLite via pip:

    pip install chainlite
    

    or

    pip install git+https://github.com/stanford-oval/chainlite.git
    
  2. Copy llm_config.yaml to your project and follow the instructions there to update it with your own configuration; a sketch of its overall shape follows this list.
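
For orientation, the configuration maps the engine names you use in code to LiteLLM model identifiers, and names the environment variable that holds each endpoint's API key. The sketch below is illustrative only; the exact field names are assumptions, so follow the comments in the llm_config.yaml that ships with ChainLite.

# Illustrative sketch; field names are assumptions -- defer to the bundled llm_config.yaml
llm_endpoints:
  - api_base: https://api.openai.com/v1
    api_key: OPENAI_API_KEY # name of the environment variable that holds the key
    engine_map:
      gpt-35-turbo: gpt-3.5-turbo # engine name used in code -> LiteLLM model identifier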

Usage

Before using ChainLite, you can call the following function to load a configuration file. If you don't, ChainLite falls back to llm_config.yaml in the current directory (the directory you run your script from).

from chainlite import load_config_from_file
load_config_from_file("./llm_config.yaml") # The path is relative to the directory you run the script from, usually the root of your project

Make sure the corresponding API keys are set as environment variables with the names you specified in the configuration file, e.g. OPENAI_API_KEY.
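
For example, if your configuration references OPENAI_API_KEY, you can verify it is set before building any chains. This check is a small sketch, not part of the ChainLite API:

import os

# Fail fast if the key named in llm_config.yaml is missing from the environment.
if "OPENAI_API_KEY" not in os.environ:
    raise RuntimeError("Set OPENAI_API_KEY before using ChainLite")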

Then you can use the following functions in your code:

llm_generation_chain(
    template_file: str,
    engine: str,
    max_tokens: int,
    temperature: float = 0.0,
    stop_tokens=None,
    top_p: float = 0.9,
    output_json: bool = False,
    template_blocks: list[tuple[str]] = None,
    keep_indentation: bool = False,
    progress_bar_desc: Optional[str] = None,
    additional_postprocessing_runnable: Runnable = None,
    tools: Optional[List[Callable]] = None,
    force_tool_calling: bool = False,
    return_top_logprobs: int = 0,
    bind_prompt_values: Dict = {},
) # returns a LangChain chain that accepts inputs and returns a string as output
load_config_from_file(config_file: str)
pprint_chain() # can be used to print inputs or outputs of a LangChain chain.
write_prompt_logs_to_file(log_file: Optional[str]) # writes the instruction, input, and output of every LLM API call to a JSON Lines file. Useful for debugging or for collecting data with LLMs
get_total_cost() # returns the total cost of all LLM API calls you have made. Resets each time you run your code.
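
For instance, the generation chain composes with pprint_chain() like any other LangChain runnable. The sketch below assumes pprint_chain() passes its input through after printing it, as described above, and that gpt-35-turbo is an engine defined in your llm_config.yaml:

import asyncio
from chainlite import get_total_cost, llm_generation_chain, load_config_from_file, pprint_chain

load_config_from_file("./llm_config.yaml")

# Build the chain once, then print whatever flows out of it.
chain = llm_generation_chain(
    template_file="joke.prompt", # the prompt file from the example below
    engine="gpt-35-turbo", # must match an engine defined in llm_config.yaml
    max_tokens=100,
) | pprint_chain()

asyncio.run(chain.ainvoke({"topic": "Physics"}))
print(get_total_cost()) # total cost of all LLM API calls made so far in this run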

Full Example

joke.prompt with a 1-shot example:

# instruction
Tell a joke about the input topic. The format of the joke should be a question and response, separated by a line break.
{# This is a comment, and will be ignored anywhere in a .prompt file. Other than block definitions and comments, '#' is allowed and is treated as a normal character.  #}

# distillation instruction
Tell a joke.

# input
Physics

# output
Why don't scientists trust atoms?
Because they make up everything!

# input
{{ topic }}

main.py:

import asyncio

from chainlite import llm_generation_chain, load_config_from_file, write_prompt_logs_to_file

load_config_from_file("./llm_config.yaml")

async def tell_joke(topic: str):
    response = await llm_generation_chain(
        template_file="joke.prompt",
        engine="gpt-35-turbo",
        max_tokens=100,
    ).ainvoke({"topic": topic})
    print(response)

asyncio.run(tell_joke("Life as a PhD student")) # prints "Why did the PhD student bring a ladder to the library?\nTo take their research to the next level!"
write_prompt_logs_to_file("llm_input_outputs.jsonl")

Then you will have llm_input_outputs.jsonl:

{"template_name": "joke.prompt", "instruction": "Tell a joke.", "input": "Life as a PhD student", "output": "Why did the PhD student bring a ladder to the library?\nTo take their research to the next level!"}

For more examples, see tests/test_llm_generate.py

Configuration

The llm_config.yaml file allows you to customize the behavior of ChainLite. Modify it to set your preferences for the LangChain and LiteLLM integrations.

Syntax Highlighting

If you are using VSCode, you can install this extension and switch .prompt files to use the "Jinja Markdown" syntax highlighting.

Contributing

We welcome contributions! Please follow these steps to contribute:

  1. Fork the repository.
  2. Create a new branch for your feature or bugfix.
  3. Commit your changes.
  4. Push the branch to your forked repository.
  5. Create a pull request with a detailed description of your changes.

License

ChainLite is licensed under the Apache-2.0 License. See the LICENSE file for more information.

Contact

For any questions or inquiries, please open an issue on the GitHub Issues page.


Thank you for using ChainLite!

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

chainlite-0.3.2.tar.gz (29.4 kB)

Built Distribution

chainlite-0.3.2-py3-none-any.whl (27.8 kB)

File details

Details for the file chainlite-0.3.2.tar.gz.

File metadata

  • Download URL: chainlite-0.3.2.tar.gz
  • Upload date:
  • Size: 29.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.9.20

File hashes

Hashes for chainlite-0.3.2.tar.gz:

  • SHA256: 08570c019d26ad4efd682e3071f67bcf2d73acdd4dc4b0818b8e5c7eeb7f9e1a
  • MD5: 723a0dcd78112f46382b25679b1daa09
  • BLAKE2b-256: c3178bf9fa63515df6468d830f8123150803cc874500b97c8e44913d33ee106d

File details

Details for the file chainlite-0.3.2-py3-none-any.whl.

File metadata

  • Download URL: chainlite-0.3.2-py3-none-any.whl
  • Upload date:
  • Size: 27.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.9.20

File hashes

Hashes for chainlite-0.3.2-py3-none-any.whl:

  • SHA256: 4de8e29d7b898430a339fdfd4158d1e37c31222bb4c519381f4799b699a237eb
  • MD5: 3760faaebd9017635488fa4a75ba4aa0
  • BLAKE2b-256: 5611b1ecb7a712bbc7889fc2ad6aa8b54c06a08a5164fb3bdc07f7ec5a0d2fd4
