# VeloxAI

⚡ Fast and simple AI library: text generation (Vtext) and image generation (Vimg)
## Installation

```bash
pip install veloxai
```
## Quick Start

```python
from veloxai import Vtext, Vimg

# Text generation
vtext = Vtext(token="your-token")
response = vtext.chat("Hello, who are you?")
print(response['response'])

# Image generation
vimg = Vimg(token="your-token")
result = vimg.create("A sunset over mountains")
print(result['images'])
```
## Features

### Vtext (Text Generation)

- ✅ Chat completions
- ✅ Streaming responses (OpenAI-style)
- ✅ File attachments (images, PDFs)
- ✅ Multiple models
- ✅ Fine-tuning / training
- ✅ System prompts

### Vimg (Image Generation)

- ✅ Text-to-image
- ✅ Image-to-image
- ✅ Automatic wait for completion
- ✅ Manual polling
## Vtext Usage

### Basic Chat

```python
from veloxai import Vtext

vtext = Vtext(token="your-token")

# Simple chat
response = vtext.chat("What is Python?")
if response['success']:
    print(response['response'])
```
### Streaming (OpenAI-style)

```python
# Stream the response chunk by chunk
for chunk in vtext.chat("Write a long story", stream=True):
    print(chunk, end='', flush=True)
```
### Chat with Files

```python
# Attach an image
response = vtext.chat(
    message="What's in this image?",
    file="photo.jpg"
)

# Attach a PDF
response = vtext.chat(
    message="Summarize this document",
    file="document.pdf"
)
```
### Use Different Models

```python
# List available models
models = vtext.models()
print(models['models'])

# Use a specific model
response = vtext.chat(
    message="Write a poem",
    model="gpt-4"
)
```
### System Prompts

```python
response = vtext.chat(
    message="Tell me about yourself",
    system="You are a helpful Python tutor"
)
```
### OpenAI-Compatible Completion

```python
response = vtext.completion(
    prompt="Once upon a time",
    max_tokens=100,
    temperature=0.7,
    stream=False
)
```
### Fine-Tuning (Personal Training)

```python
# Train the AI
vtext.train(
    identity="Expert Python programmer",
    role="Help users write better code",
    extra="Always provide examples"
)

# Subsequent chats use your training
response = vtext.chat("How do I use decorators?")

# Check the current training
training = vtext.get_training()
print(training['training'])

# Clear training
vtext.clear_training()
```
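Because training is persistent client state, it can be useful to scope it to a block and guarantee cleanup. A minimal sketch (the `temporary_training` helper name is ours, not part of veloxai); it works with any client exposing the documented `train()` / `clear_training()` methods, such as a `Vtext` instance:

```python
from contextlib import contextmanager

@contextmanager
def temporary_training(client, identity, role, extra=""):
    """Apply training for the duration of a block, then always clear it.

    `client` is any object with the documented train()/clear_training()
    methods, e.g. a Vtext instance.
    """
    client.train(identity=identity, role=role, extra=extra)
    try:
        yield client
    finally:
        client.clear_training()

# Usage (assuming `vtext` is a Vtext instance):
# with temporary_training(vtext, "Expert Python programmer",
#                         "Help users write better code"):
#     response = vtext.chat("How do I use decorators?")
```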
## Vimg Usage

### Basic Image Generation

```python
from veloxai import Vimg

vimg = Vimg(token="your-token")

# Generate and wait for completion
result = vimg.create("A beautiful sunset over mountains")
if result['success']:
    for url in result['images']:
        print(f"Image: {url}")
```
### Manual Control

```python
# Start generation without waiting
result = vimg.generate("A futuristic city")
record_id = result['record_id']

# Check status manually
status = vimg.status(record_id)
print(status['status'])  # PENDING, PROCESSING, DONE, or FAILED

# Wait for completion
final = vimg.wait(record_id, timeout=180)
print(final['images'])
```
### Image-to-Image

```python
# Generate based on an existing image
result = vimg.create(
    prompt="Make it look like a watercolor painting",
    init_image="photo.jpg"
)
```
### Quick Generation (No Wait)

```python
# Start generation and return immediately
result = vimg.create(
    prompt="A dragon",
    wait=False
)

# Returns a record_id for later checking
record_id = result['record_id']
```
## Complete Examples

### Example 1: Simple Chatbot

```python
from veloxai import Vtext

vtext = Vtext(token="your-token")

while True:
    user_input = input("You: ")
    if user_input.lower() == 'quit':
        break
    response = vtext.chat(user_input)
    if response['success']:
        print(f"AI: {response['response']}\n")
```
### Example 2: Streaming Chat

```python
from veloxai import Vtext

vtext = Vtext(token="your-token")

print("AI: ", end='')
for chunk in vtext.chat("Tell me a story", stream=True):
    print(chunk, end='', flush=True)
print()
```
### Example 3: Image Generation with Progress

```python
import time

from veloxai import Vimg

vimg = Vimg(token="your-token")

# Start generation
result = vimg.generate("A beautiful landscape")
if not result['success']:
    print(f"Error: {result['error']}")
    exit()

record_id = result['record_id']
print(f"Generation started: {record_id}")

# Poll for completion
while True:
    status = vimg.status(record_id)
    if status['status'] == 'DONE':
        print("\nComplete!")
        for i, url in enumerate(status['images'], 1):
            print(f"Image {i}: {url}")
        break
    elif status['status'] == 'FAILED':
        print("\nFailed!")
        break
    print(".", end='', flush=True)
    time.sleep(3)
```
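The fixed 3-second sleep above works, but growing the interval between polls puts less load on the API for long generations. A sketch of exponential-backoff polling, written against a generic status callable so it stays decoupled from any particular method signature (`poll_until_done` is a hypothetical helper, not part of veloxai; in practice you would pass something like `lambda: vimg.status(record_id)`):

```python
import time

def poll_until_done(check_status, timeout=180, initial_interval=1.0,
                    max_interval=10.0, sleep=time.sleep, clock=time.monotonic):
    """Poll `check_status` with exponential backoff until it reports
    DONE or FAILED, doubling the wait each round up to max_interval.

    Returns the final status dict; raises TimeoutError on timeout.
    The sleep/clock parameters exist so the loop is testable.
    """
    deadline = clock() + timeout
    interval = initial_interval
    while clock() < deadline:
        status = check_status()
        if status.get('status') in ('DONE', 'FAILED'):
            return status
        sleep(interval)
        interval = min(interval * 2, max_interval)  # 1s, 2s, 4s, ... capped
    raise TimeoutError("generation did not finish within the timeout")
```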
### Example 4: Multi-Modal Chat

```python
from veloxai import Vtext

vtext = Vtext(token="your-token")

# Analyze an image
response = vtext.chat(
    message="What objects are in this image?",
    file="room.jpg"
)
print(response['response'])

# Follow-up question
response = vtext.chat("What colors are dominant?")
print(response['response'])
```
## API Reference

### Vtext

`__init__(token, base_url=None)`
Initialize the Vtext client.

`chat(message, model="perplexity-ai", system=None, file=None, stream=False)`
Send a chat message.
- Returns: `dict`, or `Iterator[str]` if streaming

`completion(prompt, model, max_tokens, temperature, stream)`
OpenAI-style completion.

`models()`
Get available models.

`train(identity, role, extra="")`
Set fine-tuning.

`get_training()`
Get the current training.

`clear_training()`
Clear training.

### Vimg

`__init__(token, base_url=None)`
Initialize the Vimg client.

`generate(prompt, init_image=None)`
Start image generation.
- Returns: `{"success": bool, "record_id": str}`

`status(record_id)`
Check generation status.
- Returns: `{"status": str, "images": [str]}`

`wait(record_id, timeout=180, interval=3)`
Wait for completion.
- Returns: final status with images

`create(prompt, init_image=None, wait=True)`
Generate and optionally wait.
- Returns: images if `wait=True`, else a `record_id`
## Advanced Usage

### Environment Variables

```python
import os

from veloxai import Vtext

token = os.getenv('VELOX_TOKEN')
vtext = Vtext(token=token)
```
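Since `os.getenv` silently returns `None` when the variable is unset, failing fast at startup beats a confusing authentication error later. A small sketch (the `load_token` helper is ours, not part of veloxai):

```python
import os

def load_token(env_var='VELOX_TOKEN'):
    """Read the API token from the environment, failing fast if unset."""
    token = os.getenv(env_var)
    if not token:
        raise RuntimeError(
            f"{env_var} is not set; export it before running"
        )
    return token

# Usage: vtext = Vtext(token=load_token())
```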
### Error Handling

```python
response = vtext.chat("Hello")
if response['success']:
    print(response['response'])
else:
    print(f"Error: {response['error']}")
```
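Because every response carries the same `success`/`error` dict shape, transient failures can be retried generically. A sketch (`chat_with_retry` is a hypothetical helper, not a veloxai API; in practice you would pass it `vtext.chat`):

```python
import time

def chat_with_retry(chat, message, retries=3, delay=1.0, sleep=time.sleep):
    """Call `chat(message)` (e.g. vtext.chat), retrying on failure.

    Returns the first response with success=True, or the last failed
    response after exhausting all retries. The sleep parameter exists
    so the loop is testable.
    """
    response = {'success': False, 'error': 'not called'}
    for attempt in range(retries):
        response = chat(message)
        if response.get('success'):
            return response
        if attempt < retries - 1:
            sleep(delay * (attempt + 1))  # linear backoff between attempts
    return response
```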
### Custom Base URL

```python
vtext = Vtext(
    token="your-token",
    base_url="https://custom-api.example.com"
)
```
### Streaming with Error Handling

```python
try:
    for chunk in vtext.chat("Hello", stream=True):
        if chunk.startswith("Error:"):
            print(f"Stream error: {chunk}")
            break
        print(chunk, end='')
except Exception as e:
    print(f"Exception: {e}")
```
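When you want the complete text as well as the in-band `Error:` convention shown above, the chunks can be accumulated into one string. A sketch (`collect_stream` is our name, not part of veloxai; pass it the iterator from `vtext.chat(..., stream=True)`):

```python
def collect_stream(chunks):
    """Accumulate a streamed response into a single string, stopping at
    the first in-band error chunk.

    Returns (text, error): error is None on success, otherwise the
    error chunk itself.
    """
    parts = []
    for chunk in chunks:
        if chunk.startswith("Error:"):
            return ''.join(parts), chunk
        parts.append(chunk)
    return ''.join(parts), None
```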
## Integration Examples

### Flask API

```python
from flask import Flask, request, jsonify, Response

from veloxai import Vimg, Vtext

app = Flask(__name__)
vtext = Vtext(token="your-token")

@app.route('/chat', methods=['POST'])
def chat():
    message = request.json['message']
    stream = request.json.get('stream', False)

    if stream:
        def generate():
            for chunk in vtext.chat(message, stream=True):
                yield f"data: {chunk}\n\n"
        return Response(generate(), mimetype='text/event-stream')
    else:
        response = vtext.chat(message)
        return jsonify(response)

@app.route('/image', methods=['POST'])
def image():
    prompt = request.json['prompt']
    vimg = Vimg(token="your-token")
    result = vimg.create(prompt)
    return jsonify(result)

if __name__ == '__main__':
    app.run(debug=True)
```
### Discord Bot

```python
import discord

from veloxai import Vtext

vtext = Vtext(token="your-velox-token")

# discord.py 2.x requires explicit intents to read message content
intents = discord.Intents.default()
intents.message_content = True
bot = discord.Client(intents=intents)

@bot.event
async def on_message(message):
    if message.author == bot.user:
        return
    if message.content.startswith('!ask'):
        question = message.content[5:]
        # Note: vtext.chat is blocking; for busy bots, run it in a thread
        response = vtext.chat(question)
        if response['success']:
            await message.channel.send(response['response'])

bot.run('your-discord-token')
```
### Async Wrapper

```python
import asyncio

from veloxai import Vtext

async def async_chat(message):
    vtext = Vtext(token="your-token")
    loop = asyncio.get_running_loop()
    # Run the blocking chat call in the default thread pool
    response = await loop.run_in_executor(
        None,
        vtext.chat,
        message
    )
    return response

# Usage
response = asyncio.run(async_chat("Hello"))
```
## Comparison with OpenAI

### OpenAI

```python
from openai import OpenAI

client = OpenAI(api_key="key")
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}],
    stream=True
)
for chunk in response:
    # delta.content can be None on the final chunk
    print(chunk.choices[0].delta.content or '', end='')
```

### VeloxAI (Similar API)

```python
from veloxai import Vtext

vtext = Vtext(token="key")
for chunk in vtext.chat("Hello", stream=True):
    print(chunk, end='')
```

Cleaner and simpler! ✨
## Requirements

- Python 3.7+
- requests >= 2.25.0

## License

MIT License - see the LICENSE file.
## Support

- GitHub: https://github.com/yourusername/veloxai
- Issues: https://github.com/yourusername/veloxai/issues
- PyPI: https://pypi.org/project/veloxai/
## Changelog

### 1.0.0 (2024-02-08)

- Initial release
- Vtext: chat, streaming, file support, training
- Vimg: generation, img2img, auto-wait
- OpenAI-compatible API

## Credits

Made with ⚡ by the VeloxAI Team