
LocalLab: Run language models locally or in Google Colab with a friendly API

Project description

🚀 LocalLab: Your Personal AI Lab

LocalLab lets you run AI language models on your computer or Google Colab - no cloud services needed! Think of it as having ChatGPT-like capabilities right on your machine.

🤔 What is LocalLab?

LocalLab consists of two parts working together:

graph LR
    A[LocalLab Server] -->|Runs| B[AI Models]
    C[Your Code] -->|Uses| D[LocalLab Client] -->|Talks to| A

The Server (Your AI Engine)

Think of the server as your personal AI engine. It:

  • Downloads and runs AI models on your computer
  • Manages memory and resources automatically
  • Optimizes performance based on your hardware
  • Provides a simple API for accessing models

You can run it:

  • On your computer (local mode)
  • On Google Colab (free GPU mode)

The Client (Your AI Controller)

The client is how your code talks to the AI. It:

  • Connects to your LocalLab server
  • Sends requests for text generation
  • Handles chat conversations
  • Processes multiple requests at once
  • Streams responses in real-time
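
Under the hood, a sync client is typically a thin facade that drives an async client's coroutines to completion. Here is a minimal sketch of that pattern; the `AsyncEngine` and `SyncEngine` names are illustrative stand-ins, not LocalLab's actual classes:

```python
import asyncio

class AsyncEngine:
    """Stand-in for an async client: simulates a server round-trip."""
    async def generate(self, prompt: str) -> str:
        await asyncio.sleep(0)  # pretend we awaited the server here
        return f"echo: {prompt}"

class SyncEngine:
    """Sync facade: runs the async engine's coroutines to completion."""
    def __init__(self) -> None:
        self._async = AsyncEngine()

    def generate(self, prompt: str) -> str:
        # Blocks until the underlying coroutine finishes.
        return asyncio.run(self._async.generate(prompt))

print(SyncEngine().generate("Hello!"))  # echo: Hello!
```

This is why the sync client "just works" in plain scripts, while the async client shines inside event loops and web servers.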

✨ How It Works Together

When you use LocalLab:

  1. Server Setup

    from locallab import start_server
    start_server()  # Server starts and loads AI model
    
  2. Client Connection

     # Async usage
     from locallab_client import LocalLabClient  # async client
     server_url = "http://localhost:8000"  # or "https://your-ngrok-url.ngrok.app"
     client = LocalLabClient(server_url)

     # Sync usage
     from locallab_client import SyncLocalLabClient  # sync client
     client = SyncLocalLabClient(server_url)
    
  3. AI Interaction

    # Your code sends requests through the client
    # Async usage
    response = await client.generate("Write a story")
    print(response)  # Server processes and returns AI response
    
    # Or sync usage (New!)
    response = client.generate("Write a story")
    print(response)  # Same result, no async/await needed!
    

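Behind the three steps above, a generate call is ultimately an HTTP request from the client to the server's API. The sketch below shows what such a request payload might look like; the endpoint path and field names are assumptions for illustration, not LocalLab's documented wire format:

```python
import json

def build_generate_request(prompt: str, stream: bool = False) -> dict:
    # Hypothetical payload shape; consult LocalLab's API reference
    # for the real endpoint and field names.
    return {
        "url": "http://localhost:8000/generate",  # default local server URL
        "body": json.dumps({"prompt": prompt, "stream": stream}),
    }

req = build_generate_request("Write a story")
print(req["url"])  # http://localhost:8000/generate
```

The point is that the client is a convenience layer: anything that can POST JSON to the server's URL could talk to it directly.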
💡 Quick Examples

# Install the client
# pip install locallab-client

# Import the appropriate client
from locallab_client import LocalLabClient       # Async client
from locallab_client import SyncLocalLabClient  # Sync client

# Generate text (async or sync)
response = await client.generate("Hello!")  # Async with LocalLabClient
response = client.generate("Hello!")        # Sync with SyncLocalLabClient

# Chat with AI (async or sync)
response = await client.chat([              # Async with LocalLabClient
    {"role": "user", "content": "Hi!"}
])
response = client.chat([                    # Sync with SyncLocalLabClient
    {"role": "user", "content": "Hi!"}
])

# Process multiple prompts (async or sync)
responses = await client.batch_generate([   # Async with LocalLabClient
    "Write a joke",
    "Tell a story"
])
responses = client.batch_generate([         # Sync with SyncLocalLabClient
    "Write a joke",
    "Tell a story"
])
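
The advantage of batching is concurrency: all prompts can be in flight at once instead of being sent one at a time. A toy illustration of the pattern with `asyncio.gather` (the `fake_generate` coroutine is a stand-in, not LocalLab code):

```python
import asyncio

async def fake_generate(prompt: str) -> str:
    await asyncio.sleep(0.01)  # pretend this is a server round-trip
    return prompt.upper()

async def batch(prompts):
    # All requests run concurrently; total time is roughly one
    # round-trip, not one round-trip per prompt.
    return await asyncio.gather(*(fake_generate(p) for p in prompts))

results = asyncio.run(batch(["Write a joke", "Tell a story"]))
print(results)  # ['WRITE A JOKE', 'TELL A STORY']
```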

➡️ See More Examples

💻 Requirements

Local Computer:

  • Python 3.8+
  • 4GB RAM minimum
  • GPU optional (but recommended)

Google Colab:

  • Just a Google account!
  • Free tier works fine

📚 Getting Started

1. Choose Your Path

New to AI/Programming?

  1. Start with our Getting Started Guide
  2. Try the Basic Examples
  3. Join our Community

Developer?

  1. Check API Reference
  2. See Client Libraries
  3. Read Advanced Features

2. Read the Docs

Our Documentation Guide will help you:

  • Understand LocalLab's features
  • Learn best practices
  • Find solutions to common issues
  • Master advanced features

🌟 Features

  • Easy Setup: Just pip install and run
  • Multiple Models: Use any Hugging Face model
  • Resource Efficient: Automatic optimization
  • Privacy First: All local, no data sent to cloud
  • Free GPU: Google Colab integration
  • Flexible Client API: Both async and sync clients available (New!)
  • Automatic Resource Management: Sessions close automatically (New!)
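
"Sessions close automatically" usually means the client supports the context-manager protocol, so the underlying connection is released even if your code raises. A sketch of that idea under stated assumptions (the `ManagedClient` class below is illustrative, not LocalLab's API):

```python
class ManagedClient:
    """Toy client showing automatic cleanup via `with`."""
    def __init__(self) -> None:
        self.closed = False

    def generate(self, prompt: str) -> str:
        return f"ok: {prompt}"

    def close(self) -> None:
        self.closed = True

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.close()  # runs even when an exception escapes the block
        return False

with ManagedClient() as client:
    reply = client.generate("Hello!")
print(reply)  # ok: Hello!
```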

➡️ See All Features

🔍 Need Help?

📖 Additional Resources


Made with ❤️ by Utkarsh Tiwari · GitHub · Twitter · LinkedIn

Project details



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

locallab-0.4.49.tar.gz (55.9 kB)

Uploaded Source

Built Distribution


locallab-0.4.49-py3-none-any.whl (59.8 kB)

Uploaded Python 3

File details

Details for the file locallab-0.4.49.tar.gz.

File metadata

  • Download URL: locallab-0.4.49.tar.gz
  • Upload date:
  • Size: 55.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.9.22

File hashes

Hashes for locallab-0.4.49.tar.gz

  • SHA256: d3488f91713861f4547f484cc110d4ac70b96449add2fdae2c64ec852bd7b1c4
  • MD5: c4c10197236e7a2bf11738460062fe45
  • BLAKE2b-256: 942a1d8ebf598925d66db12d2cc5e95854f6a106c32f00f1b5fb8dba0633abd8

See more details on using hashes here.

File details

Details for the file locallab-0.4.49-py3-none-any.whl.

File metadata

  • Download URL: locallab-0.4.49-py3-none-any.whl
  • Upload date:
  • Size: 59.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.9.22

File hashes

Hashes for locallab-0.4.49-py3-none-any.whl

  • SHA256: 9b71d2a66b06f7902bcbe4ab199c1614c54fdb5b5a50dd03c750b669c1a76450
  • MD5: 29d37dfa5b52b18a63c6f7fad009c3f6
  • BLAKE2b-256: 5682cb764ce4c9d61a8f8bff90377432373291899e814ed56fab09d0f2b6c1b8

