Laminar Python

Python SDK for Laminar.
Laminar is an open-source platform for engineering LLM products. Trace, evaluate, annotate, and analyze LLM data. Bring LLM applications to production with confidence.
Check our open-source repo and don't forget to star it ⭐
Quickstart
First, install the package, specifying the instrumentations you want to use.
For example, to install the package with OpenAI and Anthropic instrumentations:
pip install 'lmnr[anthropic,openai]'
To install all possible instrumentations, use the following command:
pip install 'lmnr[all]'
Initialize Laminar in your code:
from lmnr import Laminar
Laminar.initialize(project_api_key="<PROJECT_API_KEY>")
Note that you only need to initialize Laminar once in your application.
Instrumentation
Manual instrumentation
To instrument any function in your code, we provide a simple @observe() decorator.
This can be useful if you want to trace a request handler or a function which combines multiple LLM calls.
import os
from openai import OpenAI
from lmnr import Laminar, observe

Laminar.initialize(project_api_key=os.environ["LMNR_PROJECT_API_KEY"])

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def poem_writer(topic: str):
    prompt = f"write a poem about {topic}"
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": prompt},
    ]

    # OpenAI calls are still automatically instrumented
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
    )
    poem = response.choices[0].message.content
    return poem

@observe()
def generate_poems():
    poem1 = poem_writer(topic="laminar flow")
    poem2 = poem_writer(topic="turbulence")
    poems = f"{poem1}\n\n---\n\n{poem2}"
    return poems
Also, you can use Laminar.start_as_current_span if you want to record a chunk of your code using a with statement.
def handle_user_request(topic: str):
    with Laminar.start_as_current_span(name="poem_writer", input=topic):
        poem = poem_writer(topic=topic)

        # Use set_span_output to record the output of the span
        Laminar.set_span_output(poem)
Automatic instrumentation
Laminar allows you to automatically instrument the majority of the most popular LLM, Vector DB, database, requests, and other libraries.
If you want to automatically instrument a default set of libraries, then simply do NOT pass the instruments argument to .initialize().
See the full list of available instrumentations in the Instruments enum.
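To check which instrumentations your installed version supports, you can also iterate over the enum at runtime. A minimal sketch; Instruments is a standard Python enum, so the exact member names depend on your lmnr version:

from lmnr import Instruments

# Print the name of every instrumentation available in this SDK version
for instrument in Instruments:
    print(instrument.name)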
If you want to automatically instrument only specific LLM, Vector DB, or other calls with OpenTelemetry-compatible instrumentation, then pass the appropriate instruments to .initialize().
For example, if you want to only instrument OpenAI and Anthropic, then do the following:
import os

from lmnr import Laminar, Instruments

Laminar.initialize(
    project_api_key=os.environ["LMNR_PROJECT_API_KEY"],
    instruments={Instruments.OPENAI, Instruments.ANTHROPIC},
)
If you want to fully disable any kind of autoinstrumentation, pass an empty set as instruments=set() to .initialize().
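For example, a minimal sketch of a fully manual setup:

from lmnr import Laminar

# An empty set disables every autoinstrumentation; only spans you create
# manually (e.g. with @observe() or start_as_current_span) are recorded.
Laminar.initialize(project_api_key="<PROJECT_API_KEY>", instruments=set())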
Autoinstrumentations are provided by Traceloop's OpenLLMetry.
Evaluations
Quickstart
Install the package:
pip install lmnr
Create a file named my_first_eval.py with the following code:
from lmnr import evaluate

def write_poem(data):
    return f"This is a good poem about {data['topic']}"

def contains_poem(output, target):
    return 1 if output in target['poem'] else 0

# Evaluation data
data = [
    {"data": {"topic": "flowers"}, "target": {"poem": "This is a good poem about flowers"}},
    {"data": {"topic": "cars"}, "target": {"poem": "I like cars"}},
]

evaluate(
    data=data,
    executor=write_poem,
    evaluators={
        "containsPoem": contains_poem
    },
    group_id="my_first_feature"
)
Run the following commands:
export LMNR_PROJECT_API_KEY=<YOUR_PROJECT_API_KEY> # get from Laminar project settings
lmnr eval my_first_eval.py # run in the virtual environment where lmnr is installed
Visit the URL printed in the console to see the results.
Overview
Bring rigor to the development of your LLM applications with evaluations.
You can run evaluations locally by providing an executor (part of the logic used in your application) and evaluators (numeric scoring functions) to the evaluate function.
evaluate takes in the following parameters:

- data – an array of EvaluationDatapoint objects, where each EvaluationDatapoint has two keys, target and data, each containing a key-value object. Alternatively, you can pass in dictionaries, and we will instantiate EvaluationDatapoints with pydantic, if possible.
- executor – the logic you want to evaluate. This function must take data as the first argument and may produce any output. It can be either a regular or an async function.
- evaluators – a dictionary that maps evaluator names to evaluators. Evaluators are functions that take the executor's output as the first argument and target as the second, and produce a numeric score. Each function can return either a single number or a dict[str, int | float] of scores (see the sketch after this list). Each evaluator can be either a regular or an async function.
- name – optional name for the evaluation. Automatically generated if not provided.
- group_id – optional group name for the evaluation. Evaluations within the same group can be compared visually side-by-side.

If you already have the outputs of the executors you want to evaluate, you can specify the executor as an identity function that takes in data and returns only the needed value(s) from it.
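As a rough illustration of both patterns, here is a minimal sketch; the datapoint keys (stored_output, expected) and function names are hypothetical, not part of the SDK:

from lmnr import evaluate

# Hypothetical identity executor: the outputs were produced earlier,
# so the executor just extracts them from the datapoint.
def identity_executor(data):
    return data["stored_output"]

# Hypothetical evaluator returning multiple named scores at once.
def grade(output, target):
    return {
        "exact_match": 1 if output == target["expected"] else 0,
        "length_ratio": len(output) / len(target["expected"]),
    }

evaluate(
    data=[
        {
            "data": {"stored_output": "This is a good poem about flowers"},
            "target": {"expected": "This is a good poem about flowers"},
        },
    ],
    executor=identity_executor,
    evaluators={"grade": grade},
)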
Read the docs to learn more about evaluations.
Laminar pipelines as prompt chain managers
You can create Laminar pipelines in the UI and manage chains of LLM calls there.
After you are ready to use your pipeline in your code, deploy it in Laminar by selecting the target version for the pipeline.
Once your pipeline target is set, you can call it from Python in just a few lines.
Example use:
from lmnr import Laminar

Laminar.initialize('<YOUR_PROJECT_API_KEY>', instruments=set())

result = Laminar.run(
    pipeline='my_pipeline_name',
    inputs={'input_node_name': 'some_value'},
    # all environment variables
    env={'OPENAI_API_KEY': 'sk-some-key'},
)
Resulting in:
>>> result
PipelineRunResponse(
    outputs={'output': {'value': [ChatMessage(role='user', content='hello')]}},
    # useful to locate your trace
    run_id='53b012d5-5759-48a6-a9c5-0011610e3669'
)
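You can then read values out of the response. The field names below are taken from the example response above; your node and output names will differ:

# outputs is keyed by output node name; value holds that node's result
messages = result.outputs['output']['value']
print(messages[0].content)  # prints 'hello'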
Semantic search
You can perform a semantic search on a dataset in Laminar by calling Laminar.semantic_search.
import uuid

response = Laminar.semantic_search(
    query="Greatest Chinese architectural wonders",
    dataset_id=uuid.UUID("413f8404-724c-4aa4-af16-714d84fd7958"),
)
Read more about indexing and semantic search.