Fast, minimalist, multi-model terminal-based SDK for building, testing, and interacting with LLMs via cloud APIs.

Project description

FastCCG (Fast Conversational & Completion Gateway)

FastCCG is a simple, powerful, and developer-friendly Python library for interacting with Large Language Models (LLMs). It provides a clean, unified API to work with models from leading providers like OpenAI, Google, Anthropic, and Mistral, making it easy to build, test, and deploy AI-powered applications.

🚀 Key Features

  • 🔄 Unified API: Switch between different LLM providers with minimal code changes (see the sketch after this list)
  • ⚡ Async Support: Built-in asynchronous operations for high-performance applications
  • 🧠 Retrieval-Augmented Generation (RAG): Build powerful Q&A systems over your own documents
  • ✨ Text Embedding: Convert text into vector representations for semantic search
  • 🌊 Streaming: Real-time response streaming for interactive experiences
  • 💾 Session Management: Save and restore conversation history
  • 🖥️ CLI Interface: Powerful command-line tools for quick testing and interaction
  • 🔧 Easy Configuration: Chainable methods for clean, readable code
  • 🛡️ Error Handling: Robust error handling with custom exceptions
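
Because the API is unified, switching providers is mostly a matter of importing a different model class. The sketch below assumes Mistral support mirrors the OpenAI pattern from the Quick Start; the fastccg.models.mistral path and the add_mistral_key helper are guesses by analogy, so check the documentation for the exact names.

import fastccg
from fastccg.models.gpt import gpt_4o
from fastccg.models.mistral import mistral_small  # module path assumed by analogy with fastccg.models.gpt

# Same calling code, different provider: only the key helper and model class change
openai_model = fastccg.init_model(gpt_4o, api_key=fastccg.add_openai_key("sk-..."))
mistral_model = fastccg.init_model(mistral_small, api_key=fastccg.add_mistral_key("..."))  # helper name assumed

for model in (openai_model, mistral_model):
    print(model.ask("Say hello in one word.").content)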

🏗️ Supported Providers

Provider    Models                              Status
OpenAI      GPT-4o, GPT-3.5 Turbo               ✅ Fully Supported
Google      Gemini 1.5 Pro, Gemini 1.5 Flash    ✅ Fully Supported
Mistral     Mistral Tiny, Small, Medium         ✅ Fully Supported
Anthropic   Claude 3 Sonnet                     ✅ Fully Supported

📦 Installation

pip install fastccg

⚡ Quick Start

import fastccg
from fastccg.models.gpt import gpt_4o

# Add your API key
api_key = fastccg.add_openai_key("sk-...")

# Initialize the model
model = fastccg.init_model(gpt_4o, api_key=api_key)

# Ask a question
response = model.ask("What is the best thing about Large Language Models?")
print(response.content)

🖥️ CLI Usage

FastCCG comes with a powerful CLI for quick interactions:

# List available models
fastccg models

# Ask a single question
fastccg ask "What is the capital of France?" --model gpt_4o

# Start an interactive chat session
fastccg chat --model gpt_4o

🧠 Retrieval-Augmented Generation (RAG)

Build a powerful question-answering system over your own documents with just a few lines of code. FastCCG handles the complexity of embedding, indexing, and context retrieval for you.

import asyncio
import fastccg
from fastccg.models.gpt import gpt_4o
from fastccg.embedding.openai import text_embedding_3_small
from fastccg.rag import RAGModel

# 1. Setup API keys and models
api_key = fastccg.add_openai_key("sk-...")
llm = fastccg.init_model(gpt_4o, api_key=api_key)
embedder = text_embedding_3_small(api_key=api_key)

# 2. Create and configure the RAG model
rag = RAGModel(llm=llm, embedder=embedder)

# 3. Index your documents
documents = {
    "doc1": "The sky is blue during a clear day.",
    "doc2": "The grass in the park is typically green."
}
for doc_id, text in documents.items():
    rag.add_document(doc_id, text)  # indexing call assumed; check the RAG docs for the exact method name

# 4. Ask a question related to your documents
async def main():
    response = await rag.ask_async("What color is the sky?")
    print(response.content)
    # Expected output will be based on the indexed context

asyncio.run(main())

# 5. Save your knowledge base for later use
rag.save("my_knowledge.fcvs", pretty_print=True)
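
To reload the knowledge base in a later session, a load counterpart to save() is the natural pattern. This is only a sketch: the RAGModel.load classmethod shown here is an assumption, not confirmed by this README, so consult the RAG documentation for the exact call.

# Hypothetical API: restore the saved index without re-embedding the documents
rag = RAGModel.load("my_knowledge.fcvs", llm=llm, embedder=embedder)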

🔄 Advanced Features

Asynchronous Operations

import asyncio

async def main():
    # Run multiple prompts concurrently
    task1 = model.ask_async("What is the speed of light?")
    task2 = model.ask_async("What is the capital of Australia?")
    
    responses = await asyncio.gather(task1, task2)
    for response in responses:
        print(response.content)

asyncio.run(main())

Streaming Responses

async def stream_example():
    async for chunk in model.ask_stream("Tell me a story"):
        print(chunk.content, end="", flush=True)

asyncio.run(stream_example())

Session Management

# Save conversation
model.save("my_session.json")

# Load conversation later
loaded_model = fastccg.load_model("my_session.json", api_key=api_key)
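
Because load_model restores the conversation history along with the model (see Session Management in Key Features), follow-up questions keep their context:

# Continue the restored conversation; earlier turns are still in scope
response = loaded_model.ask("Summarize our conversation so far.")
print(response.content)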

📚 Documentation

Comprehensive documentation is available in the docs/ directory of the repository.

🤝 Contributing

We welcome contributions! Please see our Contributing Guidelines for details.

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🌟 Why FastCCG?

  • Developer Experience: Clean, intuitive API that just works
  • Performance: Built with async-first architecture for scalable applications
  • Flexibility: Easy to switch between providers and models
  • Reliability: Comprehensive error handling and testing
  • Community: Open source with active development and support

📖 Read the Full Documentation | 🚀 Get Started Now | 💬 Join the Discussion

Download files

Download the file for your platform.

Source Distribution

fastccg-0.2.0.post1.tar.gz (22.3 kB)

Built Distribution

fastccg-0.2.0.post1-py3-none-any.whl (27.2 kB)

File details

Details for the file fastccg-0.2.0.post1.tar.gz.

File metadata

  • Download URL: fastccg-0.2.0.post1.tar.gz
  • Size: 22.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.5

File hashes

Hashes for fastccg-0.2.0.post1.tar.gz

Algorithm    Hash digest
SHA256       f20f5421773f8be7acb86ca68a4c746554325f24c83be161efaa9936ff64a95c
MD5          9ddc45c52232781f51b36c2aa8cfbfac
BLAKE2b-256  37eb52203c56ce2bc1ad610ccf1f5873b4a92bf7e9ada51141a2e6ab4ccb4d2c

File details

Details for the file fastccg-0.2.0.post1-py3-none-any.whl.

File metadata

  • Download URL: fastccg-0.2.0.post1-py3-none-any.whl
  • Size: 27.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.5

File hashes

Hashes for fastccg-0.2.0.post1-py3-none-any.whl

Algorithm    Hash digest
SHA256       992897d6decf4a0aa256dc8d67a1a748460841746696016a5487ce33b11e250b
MD5          b9e3b6ed2488ced0d9dfa8be60c018ec
BLAKE2b-256  1f83d501cd11af9bd0bfe3a9730a5c164015a103da91339dfa149ded2357f77f
