
LlamaIndex LLM Integration: CometAPI

Installation

To install the required packages, run:

pip install llama-index-llms-cometapi

Setup

Get API Key

  1. Visit CometAPI Console
  2. Sign up for an account (if you don't already have one)
  3. Generate your API key
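Once you have a key, you can export it so the integration picks it up automatically from the environment (the placeholder value below stands in for your real key):

```shell
# Make the key available to any process started from this shell.
# Replace the placeholder with the key generated in the console.
export COMETAPI_API_KEY="your-api-key"

# Verify it is set
echo "$COMETAPI_API_KEY"
```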

Initialize CometAPI

You can set the API key either as an environment variable COMETAPI_API_KEY or pass it directly:

from llama_index.llms.cometapi import CometAPI

# Method 1: Using environment variable
# export COMETAPI_API_KEY="your-api-key"
llm = CometAPI(model="gpt-4o-mini")

# Method 2: Direct API key
llm = CometAPI(
    api_key="your-api-key",
    model="gpt-4o-mini",
    max_tokens=256,
    context_window=4096,
)

Usage Examples

Generate Chat Responses

from llama_index.core.llms import ChatMessage

message = ChatMessage(role="user", content="Tell me a joke")
resp = llm.chat([message])
print(resp)

Streaming Chat

message = ChatMessage(role="user", content="Tell me a story")
resp = llm.stream_chat([message])
for r in resp:
    print(r.delta, end="")

Text Completion

resp = llm.complete("Tell me a joke")
print(resp)

Streaming Completion

resp = llm.stream_complete("Tell me a story")
for r in resp:
    print(r.delta, end="")
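LlamaIndex LLM classes generally also expose async counterparts of these methods (achat, astream_chat, acomplete, astream_complete). Assuming CometAPI follows that standard interface, an async streaming completion would look like this (a sketch, not verified against a live key):

```python
import asyncio

from llama_index.llms.cometapi import CometAPI


async def main():
    # Assumes COMETAPI_API_KEY is set in the environment.
    llm = CometAPI(model="gpt-4o-mini")
    # astream_complete is the async counterpart of stream_complete;
    # it yields response deltas as they arrive from the API.
    gen = await llm.astream_complete("Tell me a story")
    async for r in gen:
        print(r.delta, end="")


asyncio.run(main())
```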

Available Models

CometAPI supports various state-of-the-art models:

GPT Series

  • gpt-5-chat-latest
  • chatgpt-4o-latest
  • gpt-5-mini
  • gpt-4o-mini
  • gpt-4.1-mini

Claude Series

  • claude-opus-4-1-20250805
  • claude-sonnet-4-20250514
  • claude-3-5-haiku-latest

Gemini Series

  • gemini-2.5-pro
  • gemini-2.5-flash
  • gemini-2.0-flash

Others

  • deepseek-v3.1
  • grok-4-0709
  • qwen3-30b-a3b

For the complete list of available models, visit: https://api.cometapi.com/pricing

Model Configuration

# Use different models
llm_claude = CometAPI(model="claude-3-5-haiku-latest")
llm_gemini = CometAPI(model="gemini-2.5-flash")
llm_deepseek = CometAPI(model="deepseek-v3.1")

response = llm_claude.complete("Explain quantum computing")
print(response)
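Because the class implements the standard LlamaIndex LLM interface, it can also be set as the global default so that query engines and other components use it implicitly. A minimal sketch, assuming llama-index-core is installed alongside this package:

```python
from llama_index.core import Settings
from llama_index.llms.cometapi import CometAPI

# Route all LlamaIndex components through CometAPI by default.
Settings.llm = CometAPI(model="gpt-4o-mini")

# Components created after this point (e.g. index.as_query_engine())
# will use Settings.llm unless given an explicit llm argument.
```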



Download files

Download the file for your platform.

Source Distribution

llama_index_llms_cometapi-0.1.0.tar.gz (4.0 kB)

Built Distribution

llama_index_llms_cometapi-0.1.0-py3-none-any.whl (4.3 kB)

File details

Details for the file llama_index_llms_cometapi-0.1.0.tar.gz.


File hashes

Hashes for llama_index_llms_cometapi-0.1.0.tar.gz
Algorithm Hash digest
SHA256 489b894b08edb7190ae601ad6bb385e0c728805ead13166393d1ac566be57e25
MD5 1d0b118d10d5a005dc697b82a3970314
BLAKE2b-256 84de7d7b394f95110e9e92dedc10e76e6655a4310d4e93b1005906b1418d2c08


File details

Details for the file llama_index_llms_cometapi-0.1.0-py3-none-any.whl.


File hashes

Hashes for llama_index_llms_cometapi-0.1.0-py3-none-any.whl
Algorithm Hash digest
SHA256 0397b98465609666a19fb1203c68ad9db0e8853a4c8c81292ec94629f7fec43d
MD5 2ccaea15d24f5268bd950d7dfa0c88f3
BLAKE2b-256 38b8a7355b273e684cdb22fdc2e52b21edbdb9349fd81034011606235d72cc0a

