
LENS is a lightweight webserver designed to use Large Language Models as a tool for data exploration in human interactions.

Project description

Description

LENS: Learning and Exploring through Natural language Systems is a lightweight webserver designed to use Large Language Models as a tool for data exploration in human interactions. LENS is best used together with NOVA and DISCOVER.

Usage

LENS currently supports the OpenAI API and OLLAMA. Before you start, make sure you either have access to an OpenAI API key or have set up a local OLLAMA server.
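
If you go the OLLAMA route, it can help to verify the server is reachable before starting LENS. A minimal sketch, assuming OLLAMA is running locally on its default port 11434:

import requests

# Probe the local OLLAMA server (default port 11434); a running
# server answers GET / with a short status message.
try:
    resp = requests.get("http://127.0.0.1:11434", timeout=5)
    resp.raise_for_status()
    print(resp.text)  # typically "Ollama is running"
except requests.ConnectionError:
    print("No OLLAMA server reachable at 127.0.0.1:11434")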

To install LENS, install Python > 3.9 and run the following command in your terminal:

pip install hcai-lens

Create a file named lens.env at a suitable location. Copy the contents of the Environment section below into the newly created file and adapt them as needed. Then run LENS using the following command:

lens --env /path/to/lens.env
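
Once the server is up, you can check that it responds. A minimal sketch, assuming the host and port from the example environment below:

import requests

# Ask the running LENS server for its model list (host and port from lens.env).
resp = requests.get("http://127.0.0.1:1337/models", timeout=5)
resp.raise_for_status()
print(resp.json())  # e.g. [{"id": "...", "max_tokens": ..., "provider": "..."}]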

Environment

Example for .env file

# server
LENS_HOST = 127.0.0.1
LENS_PORT = 1337
LENS_CACHE_DUR = 600 # results of /models are cached for the specified duration in seconds

# model
DEFAULT_MODEL = llama3.1

# API_BASES
API_BASE_OLLAMA = http://127.0.0.1:11434
API_BASE_OLLAMA_CHAT = http://127.0.0.1:11434

# api keys
OPENAI_API_KEY = <openai-api-key>
OLLAMA_API_KEY = None # API keys are required for each model. Set to None if the model doesn't need one.

# prompts
LENS_DEFAULT_MAX_NEW_TOKENS = 1024
LENS_DEFAULT_TEMPERATURE = 0.8
LENS_DEFAULT_TOP_K = 50
LENS_DEFAULT_TOP_P = 0.95
LENS_DEFAULT_SYSTEM_PROMPT = "Your name is Nova. You are a helpful assistant."
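
LENS itself parses the file passed via --env, but you can sanity-check your settings before launching. A minimal sketch using python-dotenv (a user-side assumption, not a LENS dependency):

from dotenv import dotenv_values

# Parse lens.env into a plain dict of strings for inspection.
config = dotenv_values("/path/to/lens.env")
print(config["LENS_HOST"], config["LENS_PORT"])
print(config["DEFAULT_MODEL"])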

API

LENS provides a REST API that can be called from any client. Where applicable, an endpoint accepts a request body as a JSON-formatted dictionary. The API provides the following endpoints:

GET /models Retrieve a list of available models
Parameters

None

Responses
http code | content-type     | example response
200       | application/json | [{"id":"gpt-3.5-turbo-1106","max_tokens":16385,"provider":"openai"}]
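
Since /assist requires both the model id and its provider, a short sketch (assuming the default host and port) that extracts the two fields from the /models response:

import requests

# The id/provider pairs from /models are exactly what /assist expects.
models = requests.get("http://127.0.0.1:1337/models").json()
print([(m["id"], m["provider"]) for m in models])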

POST /assist application/json Send a request to an LLM and return the answer
Parameters
name          | type     | data type  | description
model         | required | str        | The id of the model as provided by /models
provider      | required | str        | The provider of the model as provided by /models
message       | required | str        | The prompt that should be sent to the model
history       | optional | list[list] | A history of previous question-answer pairs in chronological order
system_prompt | optional | str        | Set of instructions that define the model behaviour
data_desc     | optional | str        | An explanation of how the context data should be interpreted by the model
data          | optional | str        | Additional context data for the LLM
stream        | optional | bool       | Whether the answer should be streamed
top_k         | optional | int        | Sample among the k most probable next tokens
top_p         | optional | float      | Sample from the smallest set of tokens whose cumulative probability exceeds p
temperature   | optional | float      | Degree of randomness when selecting the next token among the candidates
api_base      | optional | str        | Overwrites the api_base of the server for the given provider/model combination
Responses
http code | content-type | response
200       | bytestring   | A bytestring containing the UTF-8 encoded answer
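
A minimal non-streaming call needs only the three required parameters from the table above; a sketch, assuming the default host and port and an OLLAMA model from /models (a fuller example follows in the Requests section):

import requests

# Smallest possible /assist request: only the three required fields.
payload = {
    "model": "llama3.1",
    "provider": "ollama_chat",
    "message": "How much does a banana cost?",
}
resp = requests.post("http://127.0.0.1:1337/assist", json=payload)
print(resp.content.decode("utf-8"))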

Requests

import requests

api_base = "http://127.0.0.1:1337"

# Retrieve the list of available models.
with requests.get(api_base + '/models') as response:
    print(response.content)

# Request body for /assist. 'stream': True asks the server to stream the answer.
request = {
    'model': 'llama3.1',
    'provider': 'ollama_chat',
    'message': 'Add the cost of an apple to the last thing I asked you.',
    'system_prompt': 'Your name is LENS. You are a helpful shopping assistant.',
    'data_desc': 'The data is provided in the form of tuples where the first entry is the name of a fruit, and the second entry is the price of that fruit.',
    'data': '("apple", "0.50"), ("avocado", "1.0"), ("banana", "0.80")',
    'stream': True,
    'top_k': 50,
    'top_p': 0.95,
    'temperature': 0.8,
    'history': [
        ["How much does a banana cost?", "Hello there! As a helpful shopping assistant, I'd be happy to help you find the price of a banana. According to the data provided, the cost of a banana is $0.80. So, one banana costs $0.80."]
    ]
}

# Send the request and print the full UTF-8 encoded answer.
with requests.post(api_base + '/assist', json=request) as response:
    print(response.content)
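
Because the request sets 'stream': True, the answer can also be consumed incrementally instead of in one piece. A minimal sketch reusing the request body above, assuming the server emits the answer as raw UTF-8 chunks, using requests' own streaming mode:

# Consume the streamed answer chunk by chunk as it arrives.
with requests.post(api_base + '/assist', json=request, stream=True) as response:
    for chunk in response.iter_content(chunk_size=None):
        print(chunk.decode('utf-8'), end='', flush=True)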

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

hcai_lens-1.0.0.tar.gz (8.2 kB)


Built Distribution

hcai_lens-1.0.0-py3-none-any.whl (8.5 kB)


File details

Details for the file hcai_lens-1.0.0.tar.gz.

File metadata

  • Download URL: hcai_lens-1.0.0.tar.gz
  • Upload date:
  • Size: 8.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.12.6

File hashes

Hashes for hcai_lens-1.0.0.tar.gz
Algorithm   | Hash digest
SHA256      | 65fe7e5f36b1b2c3bc316420ad5e0861e2c309aa954800857c705938c1f8e732
MD5         | 1e0a45cd9b76ab70982cb3840eeaeec9
BLAKE2b-256 | 4827b78a38e008d9f02f875ed9ca4a3234430e6a23455355c430e1c9a88e76c1

See more details on using hashes here.

File details

Details for the file hcai_lens-1.0.0-py3-none-any.whl.

File metadata

  • Download URL: hcai_lens-1.0.0-py3-none-any.whl
  • Upload date:
  • Size: 8.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.12.6

File hashes

Hashes for hcai_lens-1.0.0-py3-none-any.whl
Algorithm   | Hash digest
SHA256      | 2d3d214399024ddb7c63dd80972821eea11f61588317054b48a0cc58567b9cdd
MD5         | 06c94e7f8dc7107d676ef4f8b4115800
BLAKE2b-256 | eb39b5f451490e9f1a0420e55163c7e56f42edd6de549abf8b53cd1bbac45de3

See more details on using hashes here.
