
Fast, minimalist, multi-model, terminal-based SDK for building, testing, and interacting with LLMs via cloud APIs.

Project description

FastCCG


FastCCG is a simple, powerful, and developer-friendly Python library for interacting with Large Language Models (LLMs). It provides a clean, unified API to work with models from leading providers like OpenAI, Google, Anthropic, and Mistral, making it easy to build, test, and deploy AI-powered applications.

🚀 Key Features

  • 🔄 Unified API: Switch between different LLM providers with minimal code changes
  • ⚡ Async Support: Built-in asynchronous operations for high-performance applications
  • 🌊 Streaming: Real-time response streaming for interactive experiences
  • 💾 Session Management: Save and restore conversation history
  • 🖥️ CLI Interface: Powerful command-line tools for quick testing and interaction
  • 🔧 Easy Configuration: Chainable methods for clean, readable code
  • 🛡️ Error Handling: Robust error handling with custom exceptions
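The "unified API" idea can be pictured as every provider adapter exposing the same interface, so caller code never changes when the backing provider does. The sketch below is illustrative only; the `ChatModel` protocol and the fake adapters are assumptions for the example, not FastCCG's actual internals:

```python
from typing import Protocol


class ChatModel(Protocol):
    """The one interface every provider adapter implements."""

    def ask(self, prompt: str) -> str: ...


class FakeOpenAI:
    def ask(self, prompt: str) -> str:
        return f"openai:{prompt}"


class FakeGemini:
    def ask(self, prompt: str) -> str:
        return f"gemini:{prompt}"


def run(model: ChatModel, prompt: str) -> str:
    # Caller code is identical no matter which provider backs `model`.
    return model.ask(prompt)
```

Because `run` only depends on the shared interface, swapping providers reduces to constructing a different adapter.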

🏗️ Supported Providers

Provider    Models                             Status
---------   --------------------------------   ------------------
OpenAI      GPT-4o, GPT-3.5 Turbo              ✅ Fully Supported
Google      Gemini 1.5 Pro, Gemini 1.5 Flash   ✅ Fully Supported
Mistral     Mistral Tiny, Small, Medium        ✅ Fully Supported
Anthropic   Claude 3 Sonnet                    ✅ Fully Supported

📦 Installation

pip install fastccg

⚡ Quick Start

import fastccg
from fastccg.models.gpt import gpt_4o

# Add your API key
api_key = fastccg.add_openai_key("sk-...")

# Initialize the model
model = fastccg.init_model(gpt_4o, api_key=api_key)

# Ask a question
response = model.ask("What is the best thing about Large Language Models?")
print(response.content)

🖥️ CLI Usage

FastCCG comes with a powerful CLI for quick interactions:

# List available models
fastccg models

# Ask a single question
fastccg ask "What is the capital of France?" --model gpt-4o

# Start an interactive chat session
fastccg chat --model gpt-4o

🔄 Advanced Features

Asynchronous Operations

import asyncio
import fastccg
from fastccg.models.gpt import gpt_4o

# Set up the model as in the Quick Start
api_key = fastccg.add_openai_key("sk-...")
model = fastccg.init_model(gpt_4o, api_key=api_key)

async def main():
    # Run multiple prompts concurrently
    task1 = model.ask_async("What is the speed of light?")
    task2 = model.ask_async("What is the capital of Australia?")

    responses = await asyncio.gather(task1, task2)
    for response in responses:
        print(response.content)

asyncio.run(main())

Streaming Responses

import asyncio

async def stream_example():
    # `model` is the instance created in the Quick Start above
    async for chunk in model.ask_stream("Tell me a story"):
        print(chunk.content, end="", flush=True)

asyncio.run(stream_example())
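Conceptually, streaming is just an async generator that yields chunks as they arrive. The provider-free sketch below shows the consumption pattern; `fake_stream` is a hypothetical stand-in for `model.ask_stream`:

```python
import asyncio


async def fake_stream(text: str):
    # Yield the response word by word, as a streaming API would.
    for word in text.split():
        await asyncio.sleep(0)  # stand-in for network latency
        yield word


async def collect():
    chunks = []
    async for chunk in fake_stream("Once upon a time"):
        chunks.append(chunk)
    return chunks


print(asyncio.run(collect()))  # ['Once', 'upon', 'a', 'time']
```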

Session Management

# Save conversation
model.save("my_session.json")

# Load conversation later
loaded_model = fastccg.load_model("my_session.json", api_key=api_key)

📚 Documentation

Comprehensive documentation is available in the docs/ directory of the repository.

🤝 Contributing

We welcome contributions! Please see our Contributing Guidelines for details.

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🌟 Why FastCCG?

  • Developer Experience: Clean, intuitive API that just works
  • Performance: Built with async-first architecture for scalable applications
  • Flexibility: Easy to switch between providers and models
  • Reliability: Comprehensive error handling and testing
  • Community: Open source with active development and support

📖 Read the Full Documentation | 🚀 Get Started Now | 💬 Join the Discussion

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

fastccg-0.1.0.tar.gz (14.2 kB view details)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

fastccg-0.1.0-py3-none-any.whl (15.4 kB view details)

Uploaded Python 3

File details

Details for the file fastccg-0.1.0.tar.gz.

File metadata

  • Download URL: fastccg-0.1.0.tar.gz
  • Upload date:
  • Size: 14.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.5

File hashes

Hashes for fastccg-0.1.0.tar.gz
Algorithm     Hash digest
SHA256        5f69a66c8037ed02033b7323bde5543963274584c0c32e1e4fa27583771d0fa4
MD5           1f1f5ff0226e02aa04171c6158e231b7
BLAKE2b-256   14dad5f861f7541c18b92c247ea4a8aede916e93a5f0967daa9f690e2730229d

See more details on using hashes here.

File details

Details for the file fastccg-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: fastccg-0.1.0-py3-none-any.whl
  • Upload date:
  • Size: 15.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.5

File hashes

Hashes for fastccg-0.1.0-py3-none-any.whl
Algorithm     Hash digest
SHA256        0578caab7b3817c495f727abd5023bb01ef6f415c746a76bb193c4170e1f45de
MD5           08dd73855cbaa5d1f834453be731d698
BLAKE2b-256   819857110704bf6ecc6229053024830f729beb438450d7dd2e458794aec050e7

See more details on using hashes here.
