magic-lamp
Easily integrate LLMs into Python code.
Create magic LLM-powered Python functions that return anything you ask for. Downloads and uses a local LLM.
Quickstart
pip install magic-lamp
Define a function with a description and a set of examples.
import magic_lamp

get_atoms = magic_lamp.Function(
    "Break this molecule down into its constituent atoms. Return as a set.",
    examples=[
        ("water", {"hydrogen", "oxygen"}),
        ("glucose", {"carbon", "hydrogen", "oxygen"}),
    ],
)

print(get_atoms("ammonia"))  # => {"nitrogen", "hydrogen"}
Functions can return any Python literal.
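Returning "any Python literal" generally comes down to parsing the model's text reply with the standard library's ast.literal_eval. A minimal sketch of that idea (this is illustrative, not magic-lamp's actual implementation):

```python
import ast

def parse_literal(reply: str):
    """Parse an LLM's text reply into a Python literal (set, list, dict, number, ...)."""
    return ast.literal_eval(reply.strip())

print(parse_literal('{"nitrogen", "hydrogen"}'))  # a real Python set
print(parse_literal("[1, 2.5, 'three']"))         # a list of mixed literals
```

Unlike eval, ast.literal_eval only accepts literal syntax, so arbitrary code in a model reply cannot execute.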
Configuring the LLM
By default, magic-lamp downloads and runs a local LLM from Hugging Face. For more complex tasks, OpenAI models will perform better.
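Whichever backend runs the model, the description and examples are ultimately assembled into a few-shot prompt. A rough sketch of what that construction might look like (build_prompt is hypothetical, not part of magic-lamp's API):

```python
def build_prompt(description, examples, value):
    """Assemble a few-shot prompt from a task description and (input, output) examples."""
    lines = [description]
    for inp, out in examples:
        lines.append(f"Input: {inp!r}\nOutput: {out!r}")
    # The model is asked to complete the output for the new input.
    lines.append(f"Input: {value!r}\nOutput:")
    return "\n\n".join(lines)

print(build_prompt(
    "Write this number out in words.",
    [(1, "one"), (35, "thirty-five")],
    15690,
))
```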
Using OpenAI
OPENAI_API_KEY must be set in the environment. Pass the name of a gpt-* model to the Function constructor.
import magic_lamp

format_number = magic_lamp.Function(
    "Write this number out in words.",
    examples=[
        (1, "one"),
        (35, "thirty-five"),
        (15690, "fifteen thousand, six hundred ninety"),
    ],
    model="gpt-4o-mini",
)

print(format_number(328745226793))
Links
- https://github.com/jackmpcollins/magentic - A similar concept but using decorators.
- https://github.com/abetlen/llama-cpp-python - Used by this library to run the local LLM.
- https://ai.meta.com/blog/meta-llama-3-1/ - Llama 3.1 8b is the default model.
- https://huggingface.co/bullerwins/Meta-Llama-3.1-8B-Instruct-GGUF - Uses these GGUFs by default.