Dracula 🧛
A simple, elegant Python library for Google Gemini with powerful features. Built for developers who want to integrate AI into their projects without dealing with complex API setup.
Installation
pip install dracula-ai
Quick Start
from dracula import Dracula
from dotenv import load_dotenv
import os
load_dotenv()
ai = Dracula(api_key=os.getenv("GEMINI_API_KEY"))
response = ai.chat("Hello, who are you?")
print(response)
Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| api_key | str | required | Your Google Gemini API key |
| model | str | gemini-3-flash-preview | Gemini model name |
| max_messages | int | 10 | Maximum number of messages to remember |
| prompt | str | "You are a helpful assistant." | System prompt |
| temperature | float | 1.0 | Response creativity (0.0 - 2.0) |
| max_output_tokens | int | 8192 | Maximum response length |
| stats_filepath | str | "dracula_stats.json" | Path to save usage stats |
| language | str | "English" | Language for responses |
Features
💬 Text Chat
The most basic feature of Dracula. Send a message to Gemini and get a response back. Every message you send and every response you receive is automatically stored in memory, so Gemini always knows the context of your conversation.
ai = Dracula(api_key="your-api-key")
response = ai.chat("What is Python?")
print(response)
🌊 Streaming
Normally, Dracula waits for Gemini to finish generating the full response before returning it. Streaming changes this behavior — instead of waiting, you receive the response word by word as it is being generated, just like ChatGPT does. This is especially useful for long responses or when you want a more interactive feel in your app.
for chunk in ai.stream("Tell me a long story."):
    print(chunk, end="", flush=True)
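Under the hood, streaming APIs are typically exposed as Python generators that yield each chunk as soon as it arrives. A minimal sketch of that pattern, using a hypothetical `fake_stream` stand-in rather than a real network call:

```python
import time

def fake_stream(text, delay=0.0):
    """Yield a response chunk by chunk, like a streaming chat API."""
    for word in text.split():
        time.sleep(delay)  # stands in for network latency between chunks
        yield word + " "

# The caller prints each chunk immediately instead of waiting for the whole text
for chunk in fake_stream("Once upon a midnight dreary"):
    print(chunk, end="", flush=True)
```

Because the generator yields lazily, the first words appear before the full response exists, which is what gives streaming its interactive feel.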
🧠 Conversation Memory
Dracula automatically remembers the conversation history so Gemini can refer back to previous messages. For example, if you tell it your name in one message, it will remember it in the next. You can control how many messages are remembered with the max_messages parameter. When you want to start a completely fresh conversation, use clear_memory().
ai.chat("My name is Ahmet.")
response = ai.chat("What is my name?")
print(response) # It remembers! ✅
ai.clear_memory() # Wipe memory
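A rolling buffer like this is commonly built on `collections.deque` with a `maxlen`. The sketch below is an illustration of the idea behind max_messages, not Dracula's actual internal storage:

```python
from collections import deque

class BoundedMemory:
    """Minimal sketch of a rolling conversation buffer."""

    def __init__(self, max_messages=10):
        # A deque with maxlen silently drops the oldest entry when full
        self.messages = deque(maxlen=max_messages)

    def add(self, role, text):
        self.messages.append({"role": role, "text": text})

    def clear_memory(self):
        self.messages.clear()

memory = BoundedMemory(max_messages=3)
for i in range(5):
    memory.add("user", f"message {i}")
# Only the 3 most recent messages survive; the oldest two were dropped
print([m["text"] for m in memory.messages])
```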
💾 Save & Load History
By default, conversation history only exists while your program is running. Once you stop the program, the history is lost. Save & Load History solves this by letting you save the conversation to a JSON file and reload it later, so your AI can continue right where it left off — even in a completely new run of your program.
ai.save_history("conversation.json")
# Later, in a new run of your program:
ai.load_history("conversation.json")
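Saving and loading a chat history is essentially a JSON round trip. The message shape below is a hypothetical illustration (Dracula's actual on-disk format may differ):

```python
import json
import os
import tempfile

# Hypothetical message shape for illustration only
history = [
    {"role": "user", "text": "My name is Ahmet."},
    {"role": "model", "text": "Nice to meet you, Ahmet!"},
]

def save_history(messages, path):
    # ensure_ascii=False keeps non-English text readable in the file
    with open(path, "w", encoding="utf-8") as f:
        json.dump(messages, f, ensure_ascii=False, indent=2)

def load_history(path):
    with open(path, encoding="utf-8") as f:
        return json.load(f)

path = os.path.join(tempfile.gettempdir(), "conversation.json")
save_history(history, path)
assert load_history(path) == history  # round-trips intact across runs
```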
📜 Pretty Print History
get_history() returns the raw conversation history as a list of dictionaries, which can be hard to read. print_history() formats the same data into a clean, human-readable layout with clear labels for each message, making it much easier to follow the conversation at a glance.
ai.print_history()
🎭 System Prompt
The system prompt is a set of instructions you give to Gemini before the conversation starts. It defines the AI's personality, role, and behavior for the entire conversation. For example, you can tell it to act as a pirate, a chef, a formal assistant, or anything else you can imagine. The user will never see this prompt — it works silently in the background.
ai = Dracula(
    api_key="your-api-key",
    prompt="You are a pirate who answers everything dramatically."
)
# You can also change it anytime during the conversation:
ai.set_prompt("You are now a formal assistant.")
🌡️ Temperature Control
Temperature controls how creative and random Gemini's responses are. A low temperature (close to 0.0) makes responses more focused, predictable, and factual — great for technical questions. A high temperature (close to 2.0) makes responses more creative, surprising, and varied — great for storytelling or brainstorming. The default value of 1.0 is a balanced middle ground.
ai = Dracula(api_key="your-api-key", temperature=0.2) # Focused
ai = Dracula(api_key="your-api-key", temperature=1.8) # Creative
# You can also change it anytime:
ai.set_temperature(0.5)
📏 Max Output Tokens
Tokens are small chunks of text — roughly one token per word.
max_output_tokens controls the maximum length of Gemini's responses. If you want short, concise answers, set it low; if you want long, detailed responses, set it high. The default is 8192, which is large enough for most use cases.
ai = Dracula(api_key="your-api-key", max_output_tokens=256) # Short responses
ai = Dracula(api_key="your-api-key", max_output_tokens=8192) # Long responses
# You can also change it anytime:
ai.set_max_output_tokens(512)
🌍 Response Language
By default Gemini responds in whatever language the user writes in. The language feature overrides this behavior and forces Gemini to always respond in a specific language, regardless of what language the user writes in. This is useful for apps targeting a specific audience or for language learning tools.
ai = Dracula(api_key="your-api-key", language="Turkish")
response = ai.chat("Hello!")
print(response) # Merhaba! ✅
# You can also change it anytime:
ai.set_language("Spanish")
📊 Usage Stats
Dracula automatically tracks how many messages you've sent and received, and how many characters were exchanged in total. These stats are saved to a JSON file and persist across sessions, so they accumulate over time. This is useful for monitoring your API usage or just satisfying your curiosity about how much you've chatted with your AI.
print(ai.get_stats())
# {
# "total_messages": 5,
# "total_responses": 5,
# "total_characters_sent": 120,
# "total_characters_received": 3400
# }
ai.reset_stats() # Reset back to zero
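Persistent counters like these can be kept in a small JSON file that is loaded at startup and rewritten after each exchange. A minimal sketch of that bookkeeping (the field names mirror get_stats() above, but the implementation is illustrative, not Dracula's own):

```python
import json
import os
import tempfile

def empty_stats():
    return {"total_messages": 0, "total_responses": 0,
            "total_characters_sent": 0, "total_characters_received": 0}

def load_stats(path):
    # A missing file simply means a fresh start at zero
    if not os.path.exists(path):
        return empty_stats()
    with open(path, encoding="utf-8") as f:
        return json.load(f)

def record_exchange(stats, sent, received):
    stats["total_messages"] += 1
    stats["total_responses"] += 1
    stats["total_characters_sent"] += len(sent)
    stats["total_characters_received"] += len(received)
    return stats

def save_stats(stats, path):
    with open(path, "w", encoding="utf-8") as f:
        json.dump(stats, f)

path = os.path.join(tempfile.gettempdir(), "dracula_stats_demo.json")
stats = record_exchange(load_stats(path), "Hello!", "Hi there, how can I help?")
save_stats(stats, path)
```

Because the file is reloaded on every run, the counters accumulate across sessions instead of resetting with the program.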
🔗 Chainable Methods
Instead of calling each setter method on a separate line, chainable methods let you combine multiple settings into a single, clean line of code. This works because each setter method returns the Dracula object itself after making the change, allowing you to immediately call another method on it.
# Without chaining:
ai.set_prompt("You are a chef.")
ai.set_temperature(0.9)
ai.set_language("Turkish")
# With chaining — same result, much cleaner:
ai.set_prompt("You are a chef.").set_temperature(0.9).set_language("Turkish")
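The mechanism behind chaining is simply that each setter ends with `return self`. A self-contained sketch of the pattern (a toy class, not Dracula's implementation):

```python
class FluentConfig:
    """Sketch of the return-self pattern that makes method chaining work."""

    def __init__(self):
        self.prompt = None
        self.temperature = 1.0

    def set_prompt(self, prompt):
        self.prompt = prompt
        return self  # returning self lets the next call attach directly

    def set_temperature(self, temperature):
        self.temperature = temperature
        return self

# Each call returns the same object, so the chain reads left to right
cfg = FluentConfig().set_prompt("You are a chef.").set_temperature(0.9)
print(cfg.prompt, cfg.temperature)
```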
🧹 Context Manager
A context manager lets you use Dracula with Python's with statement. The benefit is automatic cleanup: when the with block ends, Dracula automatically clears the memory and resets the stats, even if an error occurred inside the block. This is the cleanest and safest way to use Dracula, especially in larger applications.
with Dracula(api_key="your-api-key") as ai:
    ai.chat("Hello!")
    ai.print_history()
# Memory and stats automatically reset here ✅
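The guarantee that cleanup runs even on errors comes from Python's `__enter__`/`__exit__` protocol. A minimal sketch of such a class (illustrative only, not Dracula's code):

```python
class ManagedChat:
    """Sketch of the __enter__/__exit__ hooks behind the with statement."""

    def __init__(self):
        self.messages = ["hi"]
        self.cleaned_up = False

    def __enter__(self):
        return self  # the object bound by "as ai"

    def __exit__(self, exc_type, exc, tb):
        # Runs on normal exit AND when an exception escapes the block
        self.messages.clear()
        self.cleaned_up = True
        return False  # don't swallow exceptions; let them propagate

chat = ManagedChat()
with chat:
    pass
print(chat.cleaned_up)  # cleanup already happened
```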
Error Handling
Dracula provides custom exceptions so you can handle different types of errors separately and give your users clear, meaningful error messages instead of confusing Python crashes.
from dracula import ValidationException, ChatException, InvalidAPIKeyException
try:
    ai = Dracula(api_key="", temperature=5.0)
except ValidationException as e:
    print(f"Validation error: {e}")
except InvalidAPIKeyException as e:
    print(f"API key error: {e}")
except ChatException as e:
    print(f"Chat error: {e}")
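Libraries usually implement this by deriving their exceptions from a shared base class, so callers can catch everything at once or each type separately. The hierarchy and checks below are a hypothetical sketch, not Dracula's actual source:

```python
class DraculaException(Exception):
    """Hypothetical shared base class; the library's real hierarchy may differ."""

class ValidationException(DraculaException):
    pass

class InvalidAPIKeyException(DraculaException):
    pass

class ChatException(DraculaException):
    pass

def validate_settings(api_key, temperature):
    # Raise the most specific exception for each failed check
    if not api_key:
        raise InvalidAPIKeyException("API key must not be empty")
    if not 0.0 <= temperature <= 2.0:
        raise ValidationException(f"temperature must be within 0.0-2.0, got {temperature}")

try:
    validate_settings(api_key="", temperature=5.0)
except DraculaException as e:  # the base class catches any library error
    print(f"Configuration error: {e}")
```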
🎭 Role Playing Mode
Dracula comes with a set of built-in personas that you can switch between instantly. Each persona has its own predefined prompt, temperature, and language settings. You can also create your own custom persona by simply using set_prompt() and set_temperature() together.
# List all available personas
print(ai.list_personas())
# ['assistant', 'pirate', 'chef', 'shakespeare', 'scientist', 'comedian']
# Switch to a persona instantly
ai.set_persona("pirate")
print(ai.chat("Hello, who are you?"))
# Arrr, I be a fearsome pirate of the seven seas! 🏴☠️
# Works with chaining too!
ai.set_persona("comedian").chat("Why is Python the best language?")
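A persona registry of this kind is naturally a dictionary mapping names to setting bundles. The entries below are illustrative guesses (the names match list_personas() above, but the prompts and temperatures are not the library's real values):

```python
# Hypothetical persona table for illustration only
PERSONAS = {
    "assistant": {"prompt": "You are a helpful assistant.", "temperature": 1.0},
    "pirate": {"prompt": "You are a dramatic pirate of the seven seas.", "temperature": 1.3},
    "chef": {"prompt": "You are a passionate chef.", "temperature": 1.1},
}

def list_personas(registry=PERSONAS):
    return sorted(registry)

def set_persona(settings, name, registry=PERSONAS):
    if name not in registry:
        raise KeyError(f"Unknown persona: {name!r}")
    settings.update(registry[name])
    return settings  # returning the settings keeps the call chainable

settings = set_persona({}, "pirate")
print(settings["prompt"])
```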
🖥️ CLI Tool
Dracula comes with a built-in CLI tool that lets you chat with Gemini directly from the terminal without writing any Python code. This is useful for quick questions, testing, or just having fun. Make sure your GEMINI_API_KEY is set in your .env file or environment variables before using it.
# Basic chat
dracula chat "Hello, who are you?"
# Chat with a persona
dracula chat "Tell me a joke" --persona comedian
# Chat in a different language
dracula chat "Merhaba" --language Turkish
# Stream the response word by word
dracula chat "Tell me a long story" --stream
# Change temperature
dracula chat "Write a poem" --temperature 1.8
# List all available personas
dracula list-personas
# Show usage stats
dracula stats
# Reset usage stats
dracula clear-stats
# Check version
dracula --version
🖥️ Desktop Chat UI
Dracula comes with a ready-made PyQt6 desktop chat UI that you can embed in your desktop apps. It supports dark and light themes, markdown rendering, syntax highlighting for code blocks, and runs Gemini requests in a background thread so the UI never freezes. You can customize the title, subtitle, and theme to match your project.
from dracula import Dracula, launch
from dotenv import load_dotenv
import os
load_dotenv()
ai = Dracula(
    api_key=os.getenv("GEMINI_API_KEY"),
    prompt="You are a helpful assistant.",
    language="English"
)
# Dark theme
launch(ai, title="My AI App", theme="dark")
# Light theme
launch(ai, title="My AI App", theme="light")
Getting Your API Key
- Go to https://aistudio.google.com
- Sign in with your Google account
- Click "Get API Key"
- Copy your key and store it safely in a .env file
License
MIT License — feel free to use this in your own projects!
Author
Suleyman Ibis
Project details
Details for the file dracula_ai-0.5.2.tar.gz.
File metadata
- Download URL: dracula_ai-0.5.2.tar.gz
- Upload date:
- Size: 18.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 14408e09e970d182dad73ed981d979654144f04689a9359da2935d1a22bca96e |
| MD5 | fae395fb0bc9100e97a36b28e97cc912 |
| BLAKE2b-256 | 5f347c25fc05fe47030678ef5e69847952bb84f02025e4684bb67ce7ada97514 |
File details
Details for the file dracula_ai-0.5.2-py3-none-any.whl.
File metadata
- Download URL: dracula_ai-0.5.2-py3-none-any.whl
- Upload date:
- Size: 17.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | a47c479952a6874dba0547dc0d16dff23bbbf9bd7642b09e56eb9fcbbd3ce45a |
| MD5 | 082b5668f7054419b50f658472b2d339 |
| BLAKE2b-256 | 34959363db36661cb8af166613c783abf6e5f9b10557a3a613a77370d1ea0529 |