Python SDK for Laminar AI

Project description

Laminar AI

This repo provides the core library for code generation, the Laminar CLI, and the Laminar SDK.

Quickstart

python3 -m venv .myenv
source .myenv/bin/activate  # or use your favorite env management tool

pip install lmnr
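
To confirm the install, you should be able to import the client class used in the examples below:

from lmnr import Laminar  # import succeeds once lmnr is installed in the active environment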

Features

  • Make Laminar endpoint calls from your Python code
  • Make Laminar endpoint calls that can run your own functions as tools
  • CLI to generate code from pipelines you build on Laminar
  • LaminarRemoteDebugger to execute your own functions while you test your flows in the workshop

Making Laminar endpoint calls

Once you are ready to use your pipeline from your code, deploy it in Laminar by following the docs.

Once your pipeline is deployed, you can call it from Python in just a few lines.

Example use:

from lmnr import Laminar

l = Laminar('<YOUR_PROJECT_API_KEY>')
result = l.run(
    endpoint = 'my_endpoint_name',
    inputs = {'input_node_name': 'some_value'},
    # all environment variables
    env = {'OPENAI_API_KEY': 'sk-some-key'},
    # any metadata to attach to this run's trace
    metadata = {'session_id': 'your_custom_session_id'}
)

Resulting in:

>>> result
EndpointRunResponse(
    outputs={'output': {'value': [ChatMessage(role='user', content='hello')]}},
    # useful to locate your trace
    run_id='53b012d5-5759-48a6-a9c5-0011610e3669'
)
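
Based on the repr above, you can read the outputs and run id straight off the response object. The attribute access below is a sketch inferred from that repr, not a guaranteed API:

messages = result.outputs['output']['value']  # list of ChatMessage objects from the 'output' node
print(messages[0].content)  # 'hello'
print(result.run_id)  # use this id to locate the run's trace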

Making calls to pipelines that run your own logic

If your pipeline contains tool call nodes, they will be able to call your local code. The only difference is that you need to pass references to the functions you want to call directly into our SDK.

Example use:

from lmnr import Laminar, NodeInput

# adding **kwargs is safer, in case an LLM produces more arguments than needed
def my_tool(arg1: str, arg2: str, **kwargs) -> NodeInput:
    return f'{arg1}&{arg2}'

l = Laminar('<YOUR_PROJECT_API_KEY>')
result = l.run(
    endpoint = 'my_endpoint_name',
    inputs = {'input_node_name': 'some_value'},
    # all environment variables
    env = {'OPENAI_API_KEY': '<YOUR_MODEL_PROVIDER_KEY>'},
    # any metadata to attach to this run's trace
    metadata = {'session_id': 'your_custom_session_id'},
    # specify as many tools as needed.
    # Each tool name must match tool node name in the pipeline
    tools=[my_tool]
)

LaminarRemoteDebugger

If your pipeline contains tool call nodes, they will be able to call your local code. If you want to test them from the Laminar workshop in your browser, you can attach to your locally running debugger.

Step by step instructions to use LaminarRemoteDebugger:

1. Create your pipeline with tool call nodes

Add tool call nodes to your pipeline; each node's name must match the name of the local function you want it to call.

2. Start LaminarRemoteDebugger in your code

Example:

from lmnr import LaminarRemoteDebugger, NodeInput

# adding **kwargs is safer, in case an LLM produces more arguments than needed
def my_tool(arg1: str, arg2: str, **kwargs) -> NodeInput:
    return f'{arg1}&{arg2}'

debugger = LaminarRemoteDebugger('<YOUR_PROJECT_API_KEY>', [my_tool])
session_id = debugger.start()  # the session id will also be printed to console

This will establish a connection with the Laminar API and allow pipeline executions to call your local functions.

3. Link lmnr.ai workshop to your debugger

Set the DEBUGGER_SESSION_ID environment variable in your pipeline.
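
To make the value easy to copy into the pipeline's environment, you can print it yourself (a minimal sketch; the debugger also prints the session id to the console):

print(f'DEBUGGER_SESSION_ID={session_id}')  # copy this value into the pipeline environment on lmnr.ai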

4. Run and experiment

You can run as many sessions as you need, experimenting with your flows.

CLI for code generation

Basic usage

lmnr pull <pipeline_name> <pipeline_version_name> --project-api-key <PROJECT_API_KEY>

Note that the lmnr CLI command is only available from within the virtual environment where you installed the package.

To import your pipeline:

# a submodule named after your pipeline is generated in lmnr_engine.pipelines
from lmnr_engine.pipelines.my_custom_pipeline import MyCustomPipeline


pipeline = MyCustomPipeline()
res = pipeline.run(
    inputs={
        "instruction": "Write me a short linkedin post about a dev tool for LLM developers"
    },
    env={
        "OPENAI_API_KEY": <OPENAI_API_KEY>,
    }
)
print(f"Pipeline run result:\n{res}")

Current functionality

  • Supports graph generation for graphs with the following nodes: Input, Output, LLM, Router, Code.
  • For LLM nodes, it only supports OpenAI and Anthropic models. Structured output in LLM nodes will be supported soon.

PROJECT_API_KEY

Read more in the docs on how to get a PROJECT_API_KEY.
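
If you prefer not to hardcode the key, you can read it from an environment variable. A minimal sketch, assuming you have exported the key under a name of your choice (LMNR_PROJECT_API_KEY below is just an example, not a name required by the SDK):

import os

from lmnr import Laminar

# LMNR_PROJECT_API_KEY is an example variable name; use whatever you set in your environment
l = Laminar(os.environ['LMNR_PROJECT_API_KEY'])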

