Python SDK for Laminar AI
This repo provides the Laminar SDK, the Laminar CLI, and the core code-generation library.
Quickstart
python3 -m venv .myenv
source .myenv/bin/activate # or use your favorite env management tool
pip install lmnr
Features
- Make Laminar endpoint calls from your Python code
- Make Laminar endpoint calls that can run your own functions as tools
- CLI to generate code from pipelines you build on Laminar, or to execute your own functions while you test your flows in the workshop
Making Laminar endpoint calls
After you are ready to use your pipeline in your code, deploy it in Laminar following the docs.
Once your pipeline is deployed, you can call it from Python in just a few lines.
Example use:
from lmnr import Laminar
l = Laminar('<YOUR_PROJECT_API_KEY>')
result = l.run(
    endpoint='my_endpoint_name',
    inputs={'input_node_name': 'some_value'},
    # all environment variables
    env={'OPENAI_API_KEY': 'sk-some-key'},
    # any metadata to attach to this run's trace
    metadata={'session_id': 'your_custom_session_id'}
)
Resulting in:
>>> result
EndpointRunResponse(
    outputs={'output': {'value': [ChatMessage(role='user', content='hello')]}},
    # useful to locate your trace
    run_id='53b012d5-5759-48a6-a9c5-0011610e3669'
)
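The response shape above suggests each output node's result sits under `outputs[<node_name>]['value']`. A minimal sketch of pulling the value out; `ChatMessage` and `EndpointRunResponse` here are simplified stand-ins built for illustration, not the SDK's actual classes:

```python
from dataclasses import dataclass

# Simplified stand-ins for the SDK's response types, for illustration only.
@dataclass
class ChatMessage:
    role: str
    content: str

@dataclass
class EndpointRunResponse:
    outputs: dict
    run_id: str

result = EndpointRunResponse(
    outputs={'output': {'value': [ChatMessage(role='user', content='hello')]}},
    run_id='53b012d5-5759-48a6-a9c5-0011610e3669',
)

# Each output node's value sits under outputs[<node_name>]['value'].
messages = result.outputs['output']['value']
print(messages[0].content)  # hello
```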
Making calls to pipelines that run your own logic
If your pipeline contains tool call nodes, they will be able to call your local code. The only difference is that you need to pass references to the functions you want to call right into our SDK.
Example use:
from lmnr import Laminar, NodeInput
# adding **kwargs is safer, in case an LLM produces more arguments than needed
def my_tool(arg1: str, arg2: str, **kwargs) -> NodeInput:
    return f'{arg1}&{arg2}'
l = Laminar('<YOUR_PROJECT_API_KEY>')
result = l.run(
    endpoint='my_endpoint_name',
    inputs={'input_node_name': 'some_value'},
    # all environment variables
    env={'OPENAI_API_KEY': '<YOUR_MODEL_PROVIDER_KEY>'},
    # any metadata to attach to this run's trace
    metadata={'session_id': 'your_custom_session_id'},
    # specify as many tools as needed;
    # each tool name must match the tool node name in the pipeline
    tools=[my_tool],
)
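The `**kwargs` catch-all in the tool signature is worth the small cost: if the model emits an argument the tool does not declare, the call still succeeds instead of raising a `TypeError`. A standalone sketch of the failure mode it prevents (no SDK involved):

```python
def my_tool(arg1: str, arg2: str, **kwargs) -> str:
    # Extra, undeclared arguments are swallowed by **kwargs.
    return f'{arg1}&{arg2}'

# Simulate an LLM producing one more argument than the tool declares.
llm_arguments = {'arg1': 'a', 'arg2': 'b', 'unexpected': 'c'}
print(my_tool(**llm_arguments))  # a&b

# Without **kwargs in the signature, the same call would raise:
# TypeError: my_tool() got an unexpected keyword argument 'unexpected'
```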
LaminarRemoteDebugger
If your pipeline contains local call nodes, they will be able to call code right on your machine.
Step by step instructions to connect to Laminar:
1. Create your pipeline with function call nodes
Add function calls to your pipeline; these are signature definitions of your functions
2. Implement the functions
At the root level, create a file: pipeline.py
Register each function under the same name as its node.
Example:
from lmnr import Pipeline
lmnr = Pipeline()
@lmnr.func("foo")  # the node in the pipeline is called foo and has one parameter, arg
def custom_logic(arg: str) -> str:
    return arg * 10
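A name-keyed decorator like `@lmnr.func` typically just records the callable in a registry so the dev session can dispatch node calls by name. A minimal sketch of that pattern; this illustrates the idea only and is not the SDK's actual implementation:

```python
from typing import Callable, Dict

class Registry:
    """Toy stand-in for the Pipeline object's function registry."""

    def __init__(self) -> None:
        self._funcs: Dict[str, Callable] = {}

    def func(self, name: str):
        # Decorator factory: register the function under the node name.
        def decorator(f: Callable) -> Callable:
            self._funcs[name] = f
            return f
        return decorator

    def dispatch(self, name: str, *args, **kwargs):
        # Called when the running pipeline reaches the matching node.
        return self._funcs[name](*args, **kwargs)

registry = Registry()

@registry.func("foo")
def custom_logic(arg: str) -> str:
    return arg * 10

print(registry.dispatch("foo", "ab"))  # abababababababababab
```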
3. Link lmnr.ai workshop to your machine
- At the root level, create a .env file if one does not already exist.
- In project settings, create or copy a project API key.
- Add an entry in .env with: LMNR_PROJECT_API_KEY=s0meKey...
- In project settings, create or copy a dev session. These are your individual sessions.
- Add an entry in .env with: LMNR_DEV_SESSION_ID=01234567-89ab-cdef-0123-4567890ab
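Put together, the resulting .env file looks like this (both values are the placeholder examples from the steps above, not real credentials):

```shell
LMNR_PROJECT_API_KEY=s0meKey...
LMNR_DEV_SESSION_ID=01234567-89ab-cdef-0123-4567890ab
```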
4. Run the dev environment
lmnr dev
This will start a session, try to persist it, and reload the session when files change.
CLI for code generation
Basic usage
lmnr pull <pipeline_name> <pipeline_version_name> --project-api-key <PROJECT_API_KEY>
Note that the lmnr CLI command is only available from within the virtual environment where you installed the package.
To import your pipeline
# a submodule named after your pipeline will be generated in lmnr_engine.pipelines
from lmnr_engine.pipelines.my_custom_pipeline import MyCustomPipeline

pipeline = MyCustomPipeline()
res = pipeline.run(
    inputs={
        "instruction": "Write me a short linkedin post about a dev tool for LLM developers"
    },
    env={
        "OPENAI_API_KEY": "<OPENAI_API_KEY>",
    },
)
print(f"Pipeline run result:\n{res}")
Current functionality
- Supports graph generation for graphs with the following nodes: Input, Output, LLM, Router, Code.
- For LLM nodes, it only supports OpenAI and Anthropic models. Structured output in LLM nodes will be supported soon.
PROJECT_API_KEY
Read more in the docs on how to get a PROJECT_API_KEY.