
Python SDK for Laminar AI

Project description

Laminar Python

Python SDK for Laminar.

Laminar is an open-source platform for engineering LLM products. Trace, evaluate, annotate, and analyze LLM data. Bring LLM applications to production with confidence.

Check out our open-source repo and don't forget to star it ⭐


Quickstart

First, install the package:

pip install lmnr

Then, in your code:

from lmnr import Laminar as L

L.initialize(project_api_key="<PROJECT_API_KEY>")

This will automatically instrument most of the LLM, Vector DB, and related calls with OpenTelemetry-compatible instrumentation.

Note that you only need to initialize Laminar once in your application.
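
For example, a common pattern is to call initialize once from your application's entrypoint, before any instrumented calls are made. A minimal sketch (the module layout here is illustrative, not something the SDK requires):

# main.py - application entrypoint (illustrative layout)
import os

from lmnr import Laminar as L

# Initialize exactly once, before any LLM or Vector DB calls are made
L.initialize(project_api_key=os.environ["LMNR_PROJECT_API_KEY"])

# ... start the rest of your application here; supported library calls
# are then traced automatically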

Instrumentation

Manual instrumentation

To instrument any function in your code, we provide a simple @observe() decorator. This can be useful if you want to trace a request handler or a function that combines multiple LLM calls.

import os
from openai import OpenAI
from lmnr import Laminar as L, observe

L.initialize(project_api_key=os.environ["LMNR_PROJECT_API_KEY"])

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def poem_writer(topic: str):
    prompt = f"write a poem about {topic}"
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": prompt},
    ]

    # OpenAI calls are still automatically instrumented
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
    )
    poem = response.choices[0].message.content

    return poem

@observe()
def generate_poems():
    poem1 = poem_writer(topic="laminar flow")
    L.event("is_poem_generated", True)
    poem2 = poem_writer(topic="turbulence")
    L.event("is_poem_generated", True)
    poems = f"{poem1}\n\n---\n\n{poem2}"
    return poems

You can also use Laminar.start_as_current_span if you want to record a chunk of your code in a with statement.

def handle_user_request(topic: str):
    with L.start_as_current_span(name="poem_writer", input=topic):
        ...

        poem = poem_writer(topic=topic)
        
        ...
        
        # while within the span, you can attach laminar events to it
        L.event("is_poem_generated", True)

        # Use set_span_output to record the output of the span
        L.set_span_output(poem)

Automatic instrumentation

Laminar allows you to automatically instrument the majority of the most popular LLM, Vector DB, database, requests, and other libraries.

If you want to automatically instrument a default set of libraries, simply do not pass the instruments argument to .initialize(). See the full list of available instrumentations in the Instruments enum.

If you want to automatically instrument only specific LLM, Vector DB, or other calls with OpenTelemetry-compatible instrumentation, then pass the appropriate instruments to .initialize(). For example, if you want to only instrument OpenAI and Anthropic, then do the following:

import os
from lmnr import Laminar as L, Instruments

L.initialize(project_api_key=os.environ["LMNR_PROJECT_API_KEY"], instruments={Instruments.OPENAI, Instruments.ANTHROPIC})

If you want to fully disable any kind of autoinstrumentation, pass an empty set as instruments=set() to .initialize().
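
For example, a minimal sketch of initializing with autoinstrumentation fully disabled (manually created spans and events should still be recorded, since they don't rely on autoinstrumentation):

import os

from lmnr import Laminar as L

# Empty set: no libraries are auto-instrumented;
# only manual spans and events will be traced
L.initialize(
    project_api_key=os.environ["LMNR_PROJECT_API_KEY"],
    instruments=set(),
)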

Autoinstrumentations are provided by Traceloop's OpenLLMetry.

Sending events

You can send events in two ways:

  • .event(name, value) – for a pre-defined event with one of the possible values.
  • .evaluate_event(name, evaluator, data) – for an event that is evaluated by an evaluator pipeline based on the data.

Note that to run an evaluate event, you need to create an evaluator pipeline and create a target version for it.

Read our docs to learn more about event types and how they are created and evaluated.

Example

from lmnr import Laminar as L
# ...
poem = response.choices[0].message.content

# this will register True or False value with Laminar
L.event("topic alignment", topic in poem)

# this will run the pipeline `check_wordy` with `poem` set as the value
# of `text_input` node, and write the result as an event with name
# "excessive_wordiness"
L.evaluate_event("excessive_wordiness", "check_wordy", {"text_input": poem})

Evaluations

Quickstart

Install the package:

pip install lmnr

Create a file named my_first_eval.py with the following code:

from lmnr import evaluate

def write_poem(data):
    return f"This is a good poem about {data['topic']}"

def contains_poem(output, target):
    return 1 if output in target['poem'] else 0

# Evaluation data
data = [
    {"data": {"topic": "flowers"}, "target": {"poem": "This is a good poem about flowers"}},
    {"data": {"topic": "cars"}, "target": {"poem": "I like cars"}},
]

evaluate(
    data=data,
    executor=write_poem,
    evaluators={
        "containsPoem": contains_poem
    },
    group_id="my_first_feature"
)

Run the following commands:

export LMNR_PROJECT_API_KEY=<YOUR_PROJECT_API_KEY>  # get from Laminar project settings
lmnr eval my_first_eval.py  # run in the virtual environment where lmnr is installed

Visit the URL printed in the console to see the results.

Overview

Bring rigor to the development of your LLM applications with evaluations.

You can run evaluations locally by providing an executor (part of the logic used in your application) and evaluators (numeric scoring functions) to the evaluate function.

evaluate takes in the following parameters:

  • data – an array of EvaluationDatapoint objects, where each EvaluationDatapoint has two keys: target and data, each containing a key-value object. Alternatively, you can pass in dictionaries, and we will instantiate EvaluationDatapoints with pydantic if possible.
  • executor – the logic you want to evaluate. This function must take data as the first argument and can produce any output. It can be either a synchronous or an async function.
  • evaluators – a dictionary that maps evaluator names to evaluators. Evaluators are functions that take the output of the executor as the first argument and target as the second argument, and produce numeric scores. Each function can return either a single number or a dict[str, int|float] of scores. Each evaluator can be either a synchronous or an async function.
  • name – optional name for the evaluation. Automatically generated if not provided.

* If you already have the outputs of the executors you want to evaluate, you can specify the executor as an identity function that takes in data and returns only the needed value(s) from it (see the sketch below).
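
For example, here is a minimal sketch of that pattern. It assumes the precomputed outputs are stored in each datapoint's data under an illustrative "output" key (not a key the SDK requires), and it also shows an evaluator returning a dict of scores:

from lmnr import evaluate

# Precomputed outputs stored alongside the inputs
# (the "output" key is illustrative, not required by the SDK)
data = [
    {
        "data": {"topic": "flowers", "output": "This is a good poem about flowers"},
        "target": {"poem": "This is a good poem about flowers"},
    },
    {
        "data": {"topic": "cars", "output": "A short poem about cars"},
        "target": {"poem": "I like cars"},
    },
]

def identity_executor(data):
    # No model call here: just return the value we want to score
    return data["output"]

def scores(output, target):
    # An evaluator may return a dict of numeric scores instead of a single number
    return {
        "exact_match": 1 if output == target["poem"] else 0,
        "output_length": len(output),
    }

evaluate(
    data=data,
    executor=identity_executor,
    evaluators={"scores": scores},
)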

Read docs to learn more about evaluations.

Laminar pipelines as prompt chain managers

You can create Laminar pipelines in the UI and manage chains of LLM calls there.

After you are ready to use your pipeline in your code, deploy it in Laminar by selecting the target version for the pipeline.

Once your pipeline target is set, you can call it from Python in just a few lines.

Example use:

from lmnr import Laminar as L

L.initialize('<YOUR_PROJECT_API_KEY>', instruments=set())

result = L.run(
    pipeline='my_pipeline_name',
    inputs={'input_node_name': 'some_value'},
    # all environment variables
    env={'OPENAI_API_KEY': 'sk-some-key'},
)

Resulting in:

>>> result
PipelineRunResponse(
    outputs={'output': {'value': [ChatMessage(role='user', content='hello')]}},
    # useful to locate your trace
    run_id='53b012d5-5759-48a6-a9c5-0011610e3669'
)

Project details



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

lmnr-0.4.16b0.tar.gz (47.9 kB)

Uploaded Source

Built Distribution

lmnr-0.4.16b0-py3-none-any.whl (65.4 kB)

Uploaded Python 3

File details

Details for the file lmnr-0.4.16b0.tar.gz.

File metadata

  • Download URL: lmnr-0.4.16b0.tar.gz
  • Size: 47.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.9.6 Darwin/24.0.0

File hashes

Hashes for lmnr-0.4.16b0.tar.gz

  • SHA256: ef672bd61a984f0312bbdd847b067bfdd131b8bfaad3aeefbd36550a65113e90
  • MD5: 347cc49ea3dbfb0450cdb057f01e5b5b
  • BLAKE2b-256: 56a1005eff4d52a2f6bcb22285397df1e4d3209f1e60971a4a579d203a77fd4c

See more details on using hashes here.

File details

Details for the file lmnr-0.4.16b0-py3-none-any.whl.

File metadata

  • Download URL: lmnr-0.4.16b0-py3-none-any.whl
  • Size: 65.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.9.6 Darwin/24.0.0

File hashes

Hashes for lmnr-0.4.16b0-py3-none-any.whl

  • SHA256: 647c615fe468eb152e293c4571359e2982958fbbb549ff409d4acd5700941a67
  • MD5: 379dd6c124bd42b20798d86462683cc7
  • BLAKE2b-256: 667c8af065009b6a0ae62e04269df4208e199e92c43fd54e5c9dae1078055cc4

See more details on using hashes here.
