
LocalLab: Run language models locally or in Google Colab with a friendly API

Project description

🚀 LocalLab: Your Personal AI Lab

Run powerful AI language models on your own computer or Google Colab - no cloud services needed! Think of it as having ChatGPT-like capabilities right on your machine.

🤔 What is LocalLab?

LocalLab brings AI to your fingertips with two key components:

graph TD
    A[Your Code] -->|Uses| B[LocalLab Client]
    B -->|Talks to| C[LocalLab Server]
    C -->|Runs| D[AI Models]
    C -->|Manages| E[Memory & Resources]
    C -->|Optimizes| F[Performance]

🎯 Key Features

📦 Easy Setup         🔒 Privacy First       🎮 Free GPU Access
🤖 Multiple Models    💾 Memory Efficient    🔄 Auto-Optimization
🌐 Local or Colab     ⚡ Fast Response       🔧 Simple API

🌟 Two Ways to Run

  1. On Your Computer (Local Mode)

    💻 Your Computer
    └── 🚀 LocalLab Server
        └── 🤖 AI Model
            └── 🔧 Auto-optimization

  2. On Google Colab (Free GPU Mode)

    ☁️ Google Colab
    └── 🎮 Free GPU
        └── 🚀 LocalLab Server
            └── 🤖 AI Model
                └── ⚡ GPU Acceleration
    

📦 Installation & Setup

1. Install Required Packages

# Install both server and client packages
pip install locallab locallab-client

2. Configure the Server (Recommended)

# Run interactive configuration
locallab config

# This will help you set up:
# - Model selection
# - Memory optimizations
# - GPU settings
# - System resources

3. Start the Server

# Start with saved configuration
locallab start

# Or start with specific options
locallab start --model microsoft/phi-2 --quantize --quantize-type int8

💡 Basic Usage

Synchronous Usage (Easier for Beginners)

from locallab_client import SyncLocalLabClient

# Connect to server
client = SyncLocalLabClient("http://localhost:8000")

try:
    print("Generating text...")
    # Generate text
    response = client.generate("Write a story")
    print(response)

    print("Streaming responses...")
    # Stream responses
    for token in client.stream_generate("Tell me a story"):
        print(token, end="", flush=True)

    print("Chat responses...")
    # Chat with AI
    response = client.chat([
        {"role": "system", "content": "You are helpful."},
        {"role": "user", "content": "Hello!"}
    ])
    print(response.choices[0]["message"]["content"])

finally:
    # Always close the client
    client.close()
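
The chat calls above take an OpenAI-style list of message dicts. A tiny helper (our own illustration, not part of `locallab_client`) keeps that list well-formed and avoids typos in the role keys:

```python
def build_messages(system, *user_turns):
    """Build an OpenAI-style messages list: one system prompt, then user turns."""
    messages = [{"role": "system", "content": system}]
    messages.extend({"role": "user", "content": turn} for turn in user_turns)
    return messages

# Same payload as the chat example above:
print(build_messages("You are helpful.", "Hello!"))
```

`client.chat(build_messages("You are helpful.", "Hello!"))` is then equivalent to passing the literal list shown in the example.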

Asynchronous Usage (For Advanced Users)

import asyncio
from locallab_client import LocalLabClient

async def main():
    # Connect to server
    client = LocalLabClient("http://localhost:8000")
    
    try:
        print("Generating text...")
        # Generate text
        response = await client.generate("Write a story")
        print(response)

        print("Streaming responses...")
        # Stream responses
        async for token in client.stream_generate("Tell me a story"):
            print(token, end="", flush=True)

        print("\nChatting with AI...")
        # Chat with AI
        response = await client.chat([
            {"role": "system", "content": "You are helpful."},
            {"role": "user", "content": "Hello!"}
        ])
        # Extracting Content
        content = response['choices'][0]['message']['content']
        print(content)
    finally:
        # Always close the client
        await client.close()

# Run the async function
asyncio.run(main())
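
Note that the two examples read the reply differently: attribute access (`response.choices[0]`) in the sync example and dict indexing (`response['choices'][0]`) in the async one. A small defensive helper (an illustration, not part of the client library) handles both shapes:

```python
def extract_content(response):
    """Return the assistant message text from a chat response, whether the
    client returned a plain dict or an object with a .choices attribute."""
    choices = response["choices"] if isinstance(response, dict) else response.choices
    return choices[0]["message"]["content"]

# Works on a dict-shaped response:
sample = {"choices": [{"message": {"role": "assistant", "content": "Hi there!"}}]}
print(extract_content(sample))  # Hi there!
```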

๐ŸŒ Google Colab Usage

Run LocalLab on Google's free GPUs:

# 1. Install packages
!pip install locallab locallab-client

# 2. Configure with CLI (notice the ! prefix)
!locallab config

# 3. Start server with CLI
!locallab start --use-ngrok

# 4. Connect client (from the notebook or your local machine)
from locallab_client import LocalLabClient
client = LocalLabClient("https://your-server-ngrok-url.app")
response = await client.generate("Hello!")  # top-level await works in notebooks

💻 Requirements

Local Computer

  • Python 3.8+
  • 4GB RAM minimum (8GB+ recommended)
  • GPU optional but recommended
  • Internet connection for downloading models
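
The requirements above can be sanity-checked before starting the server. This preflight sketch tests only what the list states (Python 3.8+, optional GPU); the function is our own, not a locallab command:

```python
import sys

def check_local_requirements():
    """Return a list of problems with the local environment (empty means OK)."""
    problems = []
    if sys.version_info < (3, 8):
        problems.append("Python 3.8+ required, found %d.%d" % sys.version_info[:2])
    try:
        import torch  # GPU is optional but recommended; torch may be absent
        if not torch.cuda.is_available():
            problems.append("no CUDA GPU detected (optional but recommended)")
    except ImportError:
        problems.append("PyTorch not installed; could not check for a GPU")
    return problems

for problem in check_local_requirements():
    print("warning:", problem)
```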

Google Colab

  • Just a Google account!
  • Free tier works fine

🌟 Features

  • Easy Setup: Just pip install and run
  • Multiple Models: Use any Hugging Face model
  • Resource Efficient: Automatic optimization
  • Privacy First: All local, no data sent to cloud
  • Free GPU: Google Colab integration
  • Flexible Client API: Both async and sync clients available
  • Automatic Resource Management: Sessions close automatically

โžก๏ธ See All Features

📚 Documentation

Getting Started

  1. Installation Guide
  2. Basic Examples
  3. CLI Usage

Advanced Topics

  1. API Reference
  2. Client Libraries
  3. Advanced Features
  4. Performance Guide

Deployment

  1. Local Setup
  2. Google Colab Guide

๐Ÿ” Need Help?

๐Ÿ“– Additional Resources

๐ŸŒŸ Star Us!

If you find LocalLab helpful, please star our repository! It helps others discover the project.


Made with ❤️ by Utkarsh Tiwari · GitHub • Twitter • LinkedIn


Download files

Download the file for your platform.

Source Distribution

locallab-0.5.1.tar.gz (56.5 kB)


Built Distribution


locallab-0.5.1-py3-none-any.whl (60.2 kB)


File details

Details for the file locallab-0.5.1.tar.gz.

File metadata

  • Download URL: locallab-0.5.1.tar.gz
  • Size: 56.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.9.22

File hashes

Hashes for locallab-0.5.1.tar.gz

  • SHA256: 9246d40cbcd8d4c647581c36d0e3a5abfe25abb875f4b201981f2598f985c47f
  • MD5: e2fb082a06c4ca54ccb4042043ab6795
  • BLAKE2b-256: 8a841a1b83e6edf641d2b3afa2e978117ee13232e72a82d8658bbdab7a57d3fa


File details

Details for the file locallab-0.5.1-py3-none-any.whl.

File metadata

  • Download URL: locallab-0.5.1-py3-none-any.whl
  • Size: 60.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.9.22

File hashes

Hashes for locallab-0.5.1-py3-none-any.whl

  • SHA256: 75b5d79d89c80687ca76de82be056a5c8211c99f3e4606c023fb33907cae662a
  • MD5: 8cf7e7f24cc567dfcb3031d80ab21609
  • BLAKE2b-256: c0b3eee204b425594147431cfc7c9ab574dcf31962c58aee4b59eee8f2dda7c3

