
LlamaIndex LLM Integration: CometAPI

Installation

To install the required packages, run:

pip install llama-index-llms-cometapi

Setup

Get API Key

  1. Visit CometAPI Console
  2. Sign up for an account (if you don't already have one)
  3. Generate your API key

Initialize CometAPI

You can set the API key as the COMETAPI_API_KEY environment variable, or pass it directly:

from llama_index.llms.cometapi import CometAPI

# Method 1: Using environment variable
# export COMETAPI_API_KEY="your-api-key"
llm = CometAPI(model="gpt-4o-mini")

# Method 2: Direct API key
llm = CometAPI(
    api_key="your-api-key",
    model="gpt-4o-mini",
    max_tokens=256,
    context_window=4096,
)
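Both methods can be combined into a simple fallback: use an explicit key if one was passed, otherwise read the environment. A minimal sketch of that pattern; the `resolve_api_key` helper is hypothetical (not part of the library), shown only to illustrate the resolution order:

```python
import os

def resolve_api_key(explicit_key=None, env_var="COMETAPI_API_KEY"):
    """Return explicit_key if provided, else fall back to the environment."""
    key = explicit_key or os.environ.get(env_var)
    if key is None:
        raise RuntimeError(f"Pass api_key directly or set {env_var}")
    return key
```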

Usage Examples

Generate Chat Responses

from llama_index.core.llms import ChatMessage

message = ChatMessage(role="user", content="Tell me a joke")
resp = llm.chat([message])
print(resp)
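Multi-turn chat works by passing the accumulated message history on each call and appending the assistant's reply before the next turn. An offline sketch of that threading pattern; `EchoLLM` and `make_message` are stand-ins for illustration (with the real client you would pass `ChatMessage` objects to `llm.chat()`):

```python
# Minimal message record mirroring ChatMessage(role=..., content=...).
def make_message(role, content):
    return {"role": role, "content": content}

class EchoLLM:
    """Stand-in for CometAPI: no network, just echoes the last message."""
    def chat(self, messages):
        return make_message("assistant", f"echo: {messages[-1]['content']}")

history = [make_message("user", "Tell me a joke")]
reply = EchoLLM().chat(history)
history.append(reply)  # keep the assistant turn for the next call
```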

Streaming Chat

message = ChatMessage(role="user", content="Tell me a story")
resp = llm.stream_chat([message])
for r in resp:
    print(r.delta, end="")
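Each streamed chunk carries a delta (the newly generated piece), and the deltas concatenate into the full response. A self-contained sketch of the accumulation pattern, using a stand-in generator instead of a live stream_chat call:

```python
class Chunk:
    """Stand-in for a streamed response chunk exposing a .delta attribute."""
    def __init__(self, delta):
        self.delta = delta

def fake_stream():
    # Mimics the chunks yielded by llm.stream_chat(...) (no API call).
    for piece in ["Once ", "upon ", "a ", "time."]:
        yield Chunk(piece)

# Join the deltas to reconstruct the complete response text.
full_text = "".join(chunk.delta for chunk in fake_stream())
print(full_text)
```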

Text Completion

resp = llm.complete("Tell me a joke")
print(resp)

Streaming Completion

resp = llm.stream_complete("Tell me a story")
for r in resp:
    print(r.delta, end="")

Available Models

CometAPI supports various state-of-the-art models:

GPT Series

  • gpt-5-chat-latest
  • chatgpt-4o-latest
  • gpt-5-mini
  • gpt-4o-mini
  • gpt-4.1-mini

Claude Series

  • claude-opus-4-1-20250805
  • claude-sonnet-4-20250514
  • claude-3-5-haiku-latest

Gemini Series

  • gemini-2.5-pro
  • gemini-2.5-flash
  • gemini-2.0-flash

Others

  • deepseek-v3.1
  • grok-4-0709
  • qwen3-30b-a3b

For the complete list of supported models, see: https://api.cometapi.com/pricing

Model Configuration

# Use different models
llm_claude = CometAPI(model="claude-3-5-haiku-latest")
llm_gemini = CometAPI(model="gemini-2.5-flash")
llm_deepseek = CometAPI(model="deepseek-v3.1")

response = llm_claude.complete("Explain quantum computing")
print(response)
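Because each model is just a differently configured client, one prompt can be fanned out to several models and the outputs compared side by side. An offline sketch of that flow; `StubLLM` and `compare_models` are stand-ins for illustration (swap in `lambda name: CometAPI(model=name)` to make real calls):

```python
from dataclasses import dataclass

@dataclass
class StubLLM:
    """Stand-in for CometAPI so the routing flow runs without network."""
    model: str

    def complete(self, prompt: str) -> str:
        return f"[{self.model}] {prompt}"

def compare_models(prompt, model_names, make_llm):
    # Map each model name to its completion for the same prompt.
    return {name: make_llm(name).complete(prompt) for name in model_names}

results = compare_models(
    "Explain quantum computing",
    ["claude-3-5-haiku-latest", "gemini-2.5-flash"],
    StubLLM,
)
```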


