
Kaleidoscope-SDK


A user toolkit for analyzing and interfacing with Large Language Models (LLMs)

Overview

kaleidoscope-sdk is a Python module for interacting with large language models hosted via the Kaleidoscope service, available at https://github.com/VectorInstitute/kaleidoscope. It provides a simple interface to launch LLMs on an HPC cluster, ask them to perform basic tasks such as text generation, and retrieve intermediate information from inside the model, such as log probabilities and activations. These features are exposed via a few high-level APIs, namely:

  • model_instances - Shows a list of all active LLMs instantiated by the model service
  • load_model - Loads an LLM via the model service
  • generate - Returns an LLM text generation based on prompt input
  • module_names - Returns all module names in the LLM neural network
  • get_activations - Retrieves all activations for a set of modules
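The generate API accepts sampling parameters such as top_k and temperature (used in the workflow below). As background on what those parameters control, here is a minimal pure-Python sketch of top-k filtering with temperature scaling; it is illustrative only and not the service's actual implementation:

```python
import math
import random


def top_k_temperature_sample(logits, k, temperature, rng=None):
    """Keep the k highest logits, rescale by temperature, softmax, sample one index."""
    rng = rng or random.Random(0)
    # Indices of the k largest logits.
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    # Temperature scaling: lower temperature sharpens the distribution.
    scaled = [logits[i] / temperature for i in top]
    # Numerically stable softmax weights over the surviving candidates.
    m = max(scaled)
    weights = [math.exp(s - m) for s in scaled]
    return rng.choices(top, weights=weights, k=1)[0]


# With k=2, only the two highest-logit indices (0 and 2) can ever be drawn.
token_id = top_k_temperature_sample([2.0, 0.5, 1.5, -1.0], k=2, temperature=0.5)
print(token_id)
```

Lower top_k and lower temperature both make generation more deterministic, which is why the workflow below pairs top_k=4 with temperature=0.5 for a fairly focused completion.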

Getting Started

Requires Python version >= 3.8

Install

python3 -m pip install kscope

or install from source:

pip install git+https://github.com/VectorInstitute/kaleidoscope-sdk.git

Authentication

In order to submit generation jobs, a designated Vector Institute cluster account is required. Please contact the AI Engineering Team in charge of Kaleidoscope for more information.

Sample Workflow

The following workflow shows how to load and interact with a Llama2-7B model on the Vector Institute Vaughan cluster.

#!/usr/bin/env python3
import kscope
import time

# Establish a client connection to the Kaleidoscope service
# If you have not previously authenticated with the service, you will be prompted to do so now
client = kscope.Client(gateway_host="llm.cluster.local", gateway_port=3001)

# See which models are supported
client.models

# See which models are instantiated and available to use
client.model_instances

# Get a handle to a model. If this model is not actively running, it will get launched in the background.
# In this example we want to use the Llama2-7B model
llama2_model = client.load_model("llama2-7b")

# If the model was not already running, it could take several minutes to load. Wait for it to come online.
while llama2_model.state != "ACTIVE":
    time.sleep(1)

# Sample text generation w/ input parameters
text_gen = llama2_model.generate("What is Vector Institute?", {'max_tokens': 5, 'top_k': 4, 'temperature': 0.5})
dir(text_gen) # display methods associated with generated text object
text_gen.generation['sequences'] # display only text
text_gen.generation['logprobs'] # display logprobs
text_gen.generation['tokens'] # display tokens

# Now let's retrieve some activations from the model
# First, show a list of modules in the neural network
print(llama2_model.module_names)

# Set up a request for module activations for a certain module layer
requested_activations = ['layers.0']
activations = llama2_model.get_activations("What are activations?", requested_activations)
print(activations)

# Next, let's manipulate the activations in the model. First, we need to import a few more modules.
import cloudpickle
import codecs
import torch
from torch import Tensor
from typing import Callable, Dict

# Define a function to manipulate the activations
def replace_with_ones(act: Tensor) -> Tensor:
    """Replace an activation with an activation filled with ones."""
    out = torch.ones_like(act, dtype=act.dtype).cuda()
    return out

# Now send the edit request
editing_fns: Dict[str, Callable] = {}
editing_fns['layers.0'] = replace_with_ones
edited_activations = llama2_model.edit_activations("What is Vector Institute?", editing_fns)
print(edited_activations)

Documentation

Full documentation and API reference are available at: http://kaleidoscope-sdk.readthedocs.io.

Contributing

Contributions to kaleidoscope are welcome. See Contributing for guidelines.

License

MIT

Citation

Reference to cite when you use Kaleidoscope in a project or a research paper:

Willes, J., Choi, M., Coatsworth, M., Shen, G., & Sivaloganathan, J. (2022). Kaleidoscope [Computer software]. Vector Institute for Artificial Intelligence. http://VectorInstitute.github.io/kaleidoscope. Retrieved from https://github.com/VectorInstitute/kaleidoscope-sdk.git.

Download files

Source Distribution

kscope-0.10.0.tar.gz (8.1 kB)

Built Distribution

kscope-0.10.0-py3-none-any.whl (8.3 kB)

File details

Details for the file kscope-0.10.0.tar.gz.

File metadata

  • Download URL: kscope-0.10.0.tar.gz
  • Size: 8.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.12

File hashes

Hashes for kscope-0.10.0.tar.gz:

  • SHA256: 024cb04ff4473235cfeb12665dac32fdd1fe96282cf15d9e4b6597561177ad42
  • MD5: df5868b701710ff02aef901f40ddfc4c
  • BLAKE2b-256: 6cbd90ecff6a24fad8df9e5d46790be16ef8c03bb3817c0299d14787151b406c

See more details on using hashes here.
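The published digests can be checked locally after downloading. A minimal sketch using Python's standard hashlib module (the filename in the comment assumes the source distribution above):

```python
import hashlib


def sha256_of(path, chunk_size=1 << 16):
    """Stream a file through SHA-256 and return its hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large archives do not need to fit in memory.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


# Compare the result against the digest published above, e.g.:
# sha256_of("kscope-0.10.0.tar.gz") == "024cb04ff4473235..."
```

If the computed digest does not match the published one, the download is corrupt or has been tampered with and should be discarded.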

File details

Details for the file kscope-0.10.0-py3-none-any.whl.

File metadata

  • Download URL: kscope-0.10.0-py3-none-any.whl
  • Size: 8.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.12

File hashes

Hashes for kscope-0.10.0-py3-none-any.whl:

  • SHA256: 9031dd2a4624145a9441478e7aff2dc59c073cac83a7f834c4b3fda703b32b73
  • MD5: 6174058bb4dd941ab48e996f50e9422b
  • BLAKE2b-256: ee5c4ef3cb5651fcee70c419ce48ef82b1ea524e6272c9ca87b32c7c1e36eced

