Wrapped API for Neural Condense Subnet - Bittensor

🚀 Organic API Usage for Neural Condense Subnet 🌐

Empowered by Bittensor


🌟 Overview

The Neural Condense Subnet (NCS) library provides an efficient and intuitive interface for compressing long input contexts into concise, high-relevance representations. This is especially useful when working with large language models (LLMs) that have token limits, as it lets you fit more relevant information into the available context window and improves inference efficiency.

📦 Installation

Install the library using pip:

pip install neural-condense

🛠️ Usage

Quick Start in Python

This example demonstrates how to initialize the CondenseClient, define a message context, generate condensed tokens, and apply them in an LLM pipeline.

  1. Condense your long messages into condensed tokens.
from neural_condense import CondenseClient, SAT_TOKEN
import numpy as np

# Initialize the client with your API key
client = CondenseClient(
  api_key="your_api_key", 
  model_name="mistralai/Mistral-7B-Instruct-v0.2"
)

# Define a long context and focused prompt
messages = [
  {
    "role": "user",
    "content": "Many of you think that EPL and other salary levels are similar, but you are wrong. In EPL, the media glosses over pre-tax salary information, while in Serie A they deal with salary. That means the salary that Milan must pay Donnarumma if they agree to sign the contract is 24m/season + 20m in salary. No one pays that much money for a goalkeeper... What is the salary that Milan must pay Donnarumma if they agree to sign the contract?"
  },
  {
    "role": "assistant",
    "content": f"The salary that Milan must pay Donnarumma if they agree to sign the contract is 24m/season + 20m in salary. {SAT_TOKEN}"
  },
  {
    "role": "user",
    "content": "Who is Donnarumma?"
  }
]

# Generate condensed tokens
condensed_output = client.create_condensed_tokens(
    messages=messages,
    tier="inference_0", 
)

# Check the shape of the condensed tokens
print(f"Condensed tokens shape: {condensed_output.condensed_tokens.shape}")

  2. Apply the condensed tokens in an LLM pipeline.
# Example: using the condensed tokens in an LLM pipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load the target language model and its tokenizer (Hugging Face transformers)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

# Use the condensed embeddings as input; convert to a tensor in case the
# client returns a NumPy array.
inputs_embeds = torch.as_tensor(condensed_output.inputs_embeds)
output_ids = model.generate(inputs_embeds=inputs_embeds, max_new_tokens=100)

print(tokenizer.batch_decode(output_ids, skip_special_tokens=True))

Asynchronous Usage 🌐

For asynchronous contexts, use AsyncCondenseClient to handle requests without blocking execution.

from neural_condense import AsyncCondenseClient
import asyncio

async def main():
    client = AsyncCondenseClient(api_key="your_api_key")
    # Reuse the `messages` list defined in the Quick Start example above
    condensed_output = await client.create_condensed_tokens(
        messages=messages,
        tier="inference_0",
        target_model="mistralai/Mistral-7B-Instruct-v0.2"
    )
    print(f"Condensed embeddings shape: {condensed_output.inputs_embeds.shape}")

asyncio.run(main())

🔍 Additional Information

Supported Models

The library supports a variety of pre-trained models available through Hugging Face's model hub. Ensure that the model you choose is compatible with the Neural Condense Subnet’s framework.

SAT_TOKEN

The SAT_TOKEN acts as a delimiter within your message templates, separating context and prompts. This token helps guide the API in recognizing specific sections of input messages, optimizing them for compression.
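
For illustration, here is a minimal sketch of how SAT_TOKEN might be placed; the message contents below are made up for this example and simply mirror the pattern used in the Quick Start.

from neural_condense import SAT_TOKEN

# Mirrors the Quick Start: the assistant turn ends with SAT_TOKEN, delimiting
# the long context from the follow-up prompt that comes after it.
messages = [
    {"role": "user", "content": "<a long background document>"},
    {"role": "assistant", "content": f"A short answer grounded in the document. {SAT_TOKEN}"},
    {"role": "user", "content": "What does the document say about the key figure?"},
]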

API Parameters

  • tier: Specify the inference tier, which affects the quality and speed of token condensation.
  • target_model: Set the target model to shape the condensed output according to the requirements of the chosen language model.
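
For example, a minimal sketch passing both parameters explicitly (the tier name and model ID reuse the values from the Quick Start):

condensed_output = client.create_condensed_tokens(
    messages=messages,
    tier="inference_0",                                  # inference tier
    target_model="mistralai/Mistral-7B-Instruct-v0.2",   # model the condensed output targets
)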

