
TextxGen

A powerful Python package for seamless interaction with Large Language Models


TextxGen is a Python package that provides a seamless interface for interacting with Large Language Models (LLMs). It supports chat-based conversations and text completions using predefined models. The package is designed to be simple, modular, and easy to use, making it ideal for developers who want to integrate LLMs into their applications.


Features

  • Predefined API Key: No need to provide your own API key; TextxGen uses a predefined key internally.
  • Chat and Completions: Supports both chat-based conversations and text completions.
  • System Prompts: Add system-level prompts to guide model interactions.
  • Error Handling: Robust exception handling for API failures, invalid inputs, and network issues.
  • Modular Design: Easily extendable to support additional models in the future.

Installation

You can install TextxGen in one of two ways:

Option 1: Install via pip

pip install textxgen

Option 2: Clone the Repository

  1. Clone the repository from GitHub:
    git clone https://github.com/Sohail-Shaikh-07/textxgen.git
    
  2. Navigate to the project directory:
    cd textxgen
    
  3. Install the package locally:
    pip install .
    

Key Concepts

Before diving into the API, here's a quick overview of the main components:

  • ChatEndpoint: Designed for conversational AI. It takes a list of messages (user, system, assistant) and maintains the context of a conversation. Use this for chatbots or interactive assistants.
  • CompletionsEndpoint: Designed for text generation. It takes a single text prompt and generates a continuation. Use this for tasks like story writing, code completion, or summarization.
  • Streaming: Allows you to receive the response chunk by chunk in real-time, rather than waiting for the entire response to finish. This creates a more responsive user experience.
  • ModelsEndpoint: A utility to list all supported models and their IDs, helping you choose the right model for your task.
  • System Prompts: Special instructions given to the model at the start of a chat to define its behavior, persona, or constraints (e.g., "You are a helpful coding assistant").
  • Temperature: A parameter (0.0 to 1.0) that controls the creativity of the response. Lower values (e.g., 0.2) make it more focused and deterministic, while higher values (e.g., 0.8) make it more creative and random.
  • Tokens: The basic units of text used by LLMs (roughly 4 characters or 0.75 words). The max_tokens parameter limits the length of the generated response.
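Since max_tokens is a budget rather than a target, it helps to estimate how many tokens a prompt consumes before setting it. A minimal sketch using the rough 4-characters-per-token heuristic mentioned above (this is an approximation, not the model's actual tokenizer):

```python
def estimate_tokens(text: str) -> int:
    """Rough token count using the ~4-characters-per-token heuristic."""
    return max(1, len(text) // 4)

prompt = "Explain quantum computing in simple terms."
print(estimate_tokens(prompt))  # rough budget to compare against max_tokens
```

Actual token counts vary by model and tokenizer; use this only for ballpark budgeting.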

API Reference

Chat Endpoint

The Chat Endpoint provides chat-based interactions with the model.

Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| messages | list | required | List of chat messages with role and content |
| model | str | "grok4.1_fast" | Model identifier to use |
| system_prompt | str | None | Optional system prompt to set context |
| temperature | float | 0.7 | Sampling temperature (0.0 to 1.0) |
| max_tokens | int | 100 | Maximum tokens to generate |
| stream | bool | False | Whether to stream the response |
| raw_response | bool | False | Whether to return raw JSON response |

Message Format

messages = [
    {"role": "system", "content": "You are a helpful assistant."},  # Optional
    {"role": "user", "content": "Hello, how are you?"},
    {"role": "assistant", "content": "I'm doing well, thank you!"}
]
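Because the endpoint receives the full message list on every call, maintaining conversation context means appending each exchange back into that list yourself. A minimal sketch of that bookkeeping (plain Python list manipulation, independent of any TextxGen API):

```python
def add_turn(messages, user_text, assistant_text):
    """Append one user/assistant exchange to the running chat history."""
    messages.append({"role": "user", "content": user_text})
    messages.append({"role": "assistant", "content": assistant_text})
    return messages

history = [{"role": "system", "content": "You are a helpful assistant."}]
add_turn(history, "Hello, how are you?", "I'm doing well, thank you!")
print(len(history))  # system + user + assistant
```

Passing the accumulated history to each subsequent chat call is what keeps the conversation coherent across turns.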

Example Usage

from textxgen.endpoints.chat import ChatEndpoint

# Initialize the chat endpoint
chat = ChatEndpoint()

# Simple chat completion
messages = [{"role": "user", "content": "What is artificial intelligence?"}]
response = chat.chat(
    messages=messages,
    model="grok4.1_fast",
    temperature=0.7,
    max_tokens=100,
)
print(f"AI: {response}")

# Chat with system prompt
messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Explain quantum computing in simple terms."},
]
response = chat.chat(
    messages=messages,
    model="grok4.1_fast",
    temperature=0.7,
    max_tokens=150,
)
print(f"AI: {response}")

# Streaming chat completion
messages = [{"role": "user", "content": "Write a short story about a robot."}]
for content in chat.chat(
    messages=messages,
    model="grok4.1_fast",
    temperature=0.8,
    max_tokens=100,
    stream=True,
):
    print(content, end="", flush=True)
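When streaming, you often want both live output and the complete text afterward (e.g., to store in chat history). A sketch that accumulates chunks while printing, assuming each yielded chunk is a plain string as in the streaming examples above (the generator is simulated here with a list):

```python
chunks = ["Once ", "upon ", "a ", "time."]  # stand-in for chat.chat(..., stream=True)

parts = []
for content in chunks:
    print(content, end="", flush=True)  # live display
    parts.append(content)               # keep for later

full_text = "".join(parts)
print()
```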

Completions Endpoint

The Completions Endpoint provides text completion functionality.

Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| prompt | str | required | Input prompt for text completion |
| model | str | "grok4.1_fast" | Model identifier to use |
| temperature | float | 0.7 | Sampling temperature (0.0 to 1.0) |
| max_tokens | int | 100 | Maximum tokens to generate |
| stream | bool | False | Whether to stream the response |
| stop | list/str | None | Stop sequences to end generation |
| n | int | 1 | Number of completions to generate |
| top_p | float | 1.0 | Nucleus sampling parameter |
| raw_response | bool | False | Whether to return raw JSON response |

Example Usage

from textxgen.endpoints.completions import CompletionsEndpoint

# Initialize the completion endpoint
completions = CompletionsEndpoint()

# Simple text completion
response = completions.complete(
    prompt="Write a haiku about nature:",
    model="grok4.1_fast",
    temperature=0.7,
    max_tokens=50,
)
print(f"Completion: {response}")

# Text completion with stop sequences
response = completions.complete(
    prompt="Once upon a time,",
    model="grok4.1_fast",
    temperature=0.8,
    max_tokens=100,
    stop=["The End", "END"],
    top_p=0.9,
)
print(f"Completion: {response}")

# Streaming text completion
for content in completions.complete(
    prompt="Write a short poem about technology",
    model="grok4.1_fast",
    temperature=0.8,
    max_tokens=100,
    stream=True,
):
    print(content, end="", flush=True)

# Multiple completions with raw response
response = completions.complete(
    prompt="Give me three different ways to say 'hello':",
    model="grok4.1_fast",
    temperature=0.9,
    max_tokens=50,
    n=3,
    raw_response=True,
)
print("Raw Response:", response)
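With n > 1 and raw_response=True, you get all candidates back in one JSON payload. A sketch of extracting them, assuming an OpenAI-style "choices" list (this shape is a hypothetical illustration; the documentation above does not specify the raw response schema, so inspect your actual payload first):

```python
raw = {  # hypothetical raw_response payload, assuming an OpenAI-style shape
    "choices": [
        {"text": "Hello!"},
        {"text": "Hi there!"},
        {"text": "Greetings!"},
    ]
}

# Collect the text of every generated candidate.
completions_text = [choice["text"] for choice in raw["choices"]]
print(completions_text)
```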

Usage

1. Chat Example

Use the ChatEndpoint to interact with chat-based models.

from textxgen.endpoints.chat import ChatEndpoint

def main():
    # Initialize the ChatEndpoint
    chat = ChatEndpoint()

    # Define the conversation messages with system prompt
    messages = [
        {"role": "system", "content": "You are a helpful AI assistant."},
        {"role": "user", "content": "What is the capital of France?"},
    ]

    # Send the chat request
    response = chat.chat(
        messages=messages,
        model="grok4.1_fast",  # Use the Grok 4.1 Fast model
        temperature=0.7,  # Adjust creativity
        max_tokens=100,   # Limit response length
    )

    # Print the response
    print("User: What is the capital of France?")
    print(f"AI: {response}")

if __name__ == "__main__":
    main()

Output:

User: What is the capital of France?
AI: The capital of France is Paris.

2. Completions Example

Use the CompletionsEndpoint to generate text completions.

from textxgen.endpoints.completions import CompletionsEndpoint

def main():
    # Initialize the CompletionsEndpoint
    completions = CompletionsEndpoint()

    # Send the completion request
    response = completions.complete(
        prompt="Write a haiku about nature:",
        model="grok4.1_fast",      # Use the Grok 4.1 Fast model
        temperature=0.7,     # Adjust creativity
        max_tokens=50,       # Limit response length
        top_p=0.9,          # Nucleus sampling
    )

    # Print the response
    print("Prompt: Write a haiku about nature:")
    print(f"Completion: {response}")

if __name__ == "__main__":
    main()

Output:

Prompt: Write a haiku about nature:
Completion: Gentle breeze whispers,
Leaves dance in golden sunlight,
Nature's quiet song.

3. Streaming Examples

Chat Streaming

from textxgen.endpoints.chat import ChatEndpoint

# Initialize the ChatEndpoint
chat = ChatEndpoint()

# Define the conversation messages with system prompt
messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Write a short story about a robot."},
]

# Send the chat request with streaming
print("User: Write a short story about a robot.")
print("AI: ", end="", flush=True)
for content in chat.chat(
    messages=messages,
    model="grok4.1_fast",
    temperature=0.8,
    max_tokens=100,
    stream=True,  # Enable streaming
):
    print(content, end="", flush=True)
print("\n")

Output:

User: Write a short story about a robot.
AI: In a bustling city of tomorrow, a small robot named Spark spent its days cleaning the streets. Unlike other robots, Spark had developed a curious habit of collecting lost items and trying to return them to their owners. One day, while cleaning a park bench, it found a small music box. As it played the melody, people gathered around, and for the first time, the city's residents saw robots not just as machines, but as beings capable of bringing joy and wonder to their lives.

Completion Streaming

from textxgen.endpoints.completions import CompletionsEndpoint

# Initialize the CompletionsEndpoint
completions = CompletionsEndpoint()

# Send the completion request with streaming
print("Prompt: Write a poem about technology")
print("Completion: ", end="", flush=True)
for content in completions.complete(
    prompt="Write a poem about technology",
    model="grok4.1_fast",
    temperature=0.8,
    max_tokens=100,
    stream=True,  # Enable streaming
):
    print(content, end="", flush=True)
print("\n")

Output:

Prompt: Write a poem about technology
Completion: In circuits deep and silicon bright,
Machines dance in digital light.
From simple tools to AI's might,
Human dreams take flight.
Each byte a story, each code a song,
In this world where we belong.

4. Listing Supported Models

Use the ModelsEndpoint to list and retrieve supported models.

from textxgen.endpoints.models import ModelsEndpoint

def main():
    """
    Example usage of the ModelsEndpoint to list and retrieve supported models.
    """
    # Initialize the ModelsEndpoint
    models = ModelsEndpoint()

    # List all supported models
    print("=== Supported Models ===")
    for model_name, display_name in models.list_display_models().items():
        print(f"{model_name}: {display_name}")

if __name__ == "__main__":
    main()
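Since requesting an unsupported model raises an error, you can validate a model ID against the listing before making a call. A sketch assuming list_display_models() returns a {model_id: display_name} mapping, as the loop above suggests (simulated here with a literal dict):

```python
supported = {  # stand-in for ModelsEndpoint().list_display_models()
    "grok4.1_fast": "Grok 4.1 Fast",
    "qwen3_coder": "Qwen 3 Coder",
}

def pick_model(requested: str, fallback: str = "grok4.1_fast") -> str:
    """Return the requested model ID if supported, else the fallback."""
    return requested if requested in supported else fallback

print(pick_model("qwen3_coder"))
print(pick_model("no_such_model"))  # falls back to the default
```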

Supported Models

TextxGen currently supports the following models:

| Model Name | Model ID | Description |
|---|---|---|
| Grok 4.1 Fast | grok4.1_fast | A fast inference version of Grok optimized for responsiveness and chat tasks. |
| Kat Coder Pro | kat_coder_pro | A coding-focused model designed for software development and debugging workflows. |
| Nemotron Nano 12B V2 Vision-Language | nemotron_nano_12b_v2_vl | NVIDIA's multimodal model supporting both text and image understanding. |
| LongCat Flash Chat | longcat_flash_chat | A lightweight conversational model optimized for fast inference. |
| Qwen 3 Coder | qwen3_coder | A code generation model built for programming and reasoning tasks. |
| Kimi K2 | kimi_k2 | A smart conversational assistant focusing on reasoning and summarization. |
| DeepSeek R1 8B | deepseek_r1_8b | An 8B reasoning-capable language model from DeepSeek's R1 series. |
| Mistral Small 3.2 (24B Instruct) | mistralsmall_3_24b | A versatile instruction model with strong reasoning and general-purpose capabilities. |
| Qwen 3 (4B Parameters) | qwen3_4b | A compact and efficient general-purpose model. |
| Qwen 3 (14B Parameters) | qwen3_14b | A more powerful version of Qwen 3 for advanced reasoning and tasks. |
| DeepSeek R1-T Chimera | deepseek_r1t_chimera | A tuned version of DeepSeek's reasoning model optimized for enhanced output quality. |
| LLaMA 4 Maverick (Instruct) | llama_4_maverick | Meta's advanced instruction-tuned model designed for broad AI applications. |
| Gemini 2.5 Flash Lite | gemini_2_5_flash_lite | Google's compact Gemini model optimized for speed and efficiency. |
| OpenAI GPT-4.1 Nano | gpt4_1_nano | A lightweight GPT version offering fast inference for general tasks. |
| OpenAI GPT-4o Mini | gpt4o_mini | A performance-balanced mini version of GPT-4o supporting multiple task types. |

Error Handling

TextxGen provides robust error handling for common issues:

  • Invalid Input: Raised when invalid input is provided (e.g., empty messages or prompts).
  • API Errors: Raised when the API returns an error (e.g., network issues or invalid requests).
  • Unsupported Models: Raised when an unsupported model is requested.

Example:

from textxgen.exceptions import InvalidInputError

try:
    response = chat.chat(messages=[])
except InvalidInputError as e:
    print("Error:", str(e))
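API and network errors are often transient, so a retry with backoff is a common pattern around calls like chat.chat(). A generic sketch that retries any callable (it catches Exception broadly because this documentation names the error categories but not a specific API-error class; the flaky function below is a stand-in simulating one transient failure):

```python
import time

def call_with_retries(fn, attempts=3, base_delay=0.1):
    """Retry a callable with exponential backoff; re-raise after the last attempt."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# A flaky stand-in that fails once, then succeeds (simulating a transient API error).
state = {"calls": 0}
def flaky():
    state["calls"] += 1
    if state["calls"] < 2:
        raise RuntimeError("transient network failure")
    return "ok"

print(call_with_retries(flaky))
```

In real code, narrow the except clause to the exceptions you actually expect; retrying on InvalidInputError, for instance, would be pointless since the input will not change.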

Contributing

Contributions are welcome! To contribute to TextxGen:

  1. Fork the repository.
  2. Create a new branch for your feature or bugfix.
  3. Submit a pull request with a detailed description of your changes.

License

TextxGen is licensed under the MIT License. See the LICENSE file for details.


Buy Me a Coffee

If you find TextxGen useful and would like to support its development, you can buy me a coffee! Your support helps maintain and improve the project.



Support

If you encounter any issues or have questions, please open an issue on the GitHub repository.
