A lightweight Python wrapper for Ollama AI models.
EZollama
A simple Python library for interacting with Ollama models via their local API, and with cloud models from Google, Groq, OpenAI, and Anthropic.
Supports model selection, chatting, persistent system prompts, listing models, downloading models, resetting chat history, and text-to-speech.
Installation
- Install Ollama: download and install it from https://ollama.com/download. The library will prompt you to install Ollama if it is not found.
- Install the library from PyPI: pip install ezollama
- Python dependencies: the library auto-installs pyttsx3 for text-to-speech if it is missing.
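The "prompt to install Ollama if not found" behavior above implies a check for the `ollama` binary. A minimal sketch of such a check using only the standard library; `ollama_installed` is a hypothetical helper name, not something EzOllama exports:

```python
import shutil

def ollama_installed() -> bool:
    """Return True if the `ollama` binary is on PATH.

    A lookup like this is presumably how the library decides
    whether to prompt you to install Ollama.
    """
    return shutil.which("ollama") is not None

print(ollama_installed())
```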
Usage for local
Import
from ezollama import EzOllama
ez = EzOllama()
Set Model
ez.set_model("llama2")
Set Persistent System Prompt
ez.set_system_prompt("You are a helpful assistant.")
Chat
response = ez.chat("Hello!")
print(response)
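EzOllama's internals aren't shown on this page, but the behavior described here (a persistent system prompt plus accumulating chat history) can be sketched with a plain message list in the format Ollama's chat API expects. Everything below is an illustration, not the library's actual code:

```python
def make_history(system_prompt):
    # The system message stays first and persists across turns.
    return [{"role": "system", "content": system_prompt}]

def add_turn(history, user_msg, assistant_msg):
    # Each chat() call appends the user message and the model's reply.
    history.append({"role": "user", "content": user_msg})
    history.append({"role": "assistant", "content": assistant_msg})
    return history

def reset_history(history):
    # Resetting presumably drops everything except the system prompt.
    return [m for m in history if m["role"] == "system"]

h = make_history("You are a helpful assistant.")
h = add_turn(h, "Hello!", "Hi there!")
print(len(h))  # 3 messages: system + user + assistant
h = reset_history(h)
print(len(h))  # back to 1: only the system prompt remains
```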
List Available Models
models = ez.list_models()
print(models)
Pull (Download) a Model
ez.pull_model("llama2")
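Under the hood, pulling a model presumably goes through Ollama's local REST API (default port 11434). A sketch of the request that `pull_model` would issue, assuming the documented `/api/pull` endpoint; the request is only constructed here, not sent:

```python
import json
import urllib.request

# Build (but don't send) the pull request for a model.
req = urllib.request.Request(
    "http://localhost:11434/api/pull",
    data=json.dumps({"name": "llama2"}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(req.full_url)  # http://localhost:11434/api/pull
# urllib.request.urlopen(req) would start the download if the server is running.
```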
Reset Chat History
ez.reset_history()
Text-to-Speech
ez.text_to_speech("Hello, this is AI speaking.")
Usage for cloud
Import
from ezollama import EzOllama
ez = EzOllama()
Set mode
ez.set_mode("mode", "api-key")  # mode is one of "groq", "google", "anthropic", or "openai"
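The accepted mode strings are easy to get wrong, so a small guard like the hypothetical one below (not part of EzOllama) can validate them before calling set_mode; the four values come from the list above:

```python
SUPPORTED_MODES = {"groq", "google", "anthropic", "openai"}

def check_mode(mode: str) -> str:
    # Normalize and validate the provider name before use.
    mode = mode.strip().lower()
    if mode not in SUPPORTED_MODES:
        raise ValueError(
            f"unsupported mode {mode!r}; choose one of {sorted(SUPPORTED_MODES)}"
        )
    return mode

print(check_mode("Google"))  # google
```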
Set model
ez.set_model("model")  # for example, 'gemini-2.5-flash'
Set Persistent System Prompt
ez.set_system_prompt("You are a helpful assistant.")
Example for local
from ezollama import EzOllama
ez = EzOllama()
ez.set_model("llama3.2:3b")
ez.set_system_prompt("You are a friendly assistant.")
while True:
    user_input = input("- ")
    resp = ez.chat(user_input)
    print(resp)
    ez.text_to_speech(resp)
Example for cloud
from ezollama import EzOllama
ez = EzOllama()
ez.set_mode("google", "API-KEY")
ez.set_model("gemini-2.5-flash")
ez.set_system_prompt("You are a friendly assistant.")
while True:
    user_input = input("- ")
    resp = ez.chat(user_input)
    print(resp)
Notes
- The library checks for, and quietly starts, the Ollama server before each API call.
- If Ollama is not installed, you will be prompted to install it.
- If the model does not exist, pull_model will print a message.
- Text-to-speech uses pyttsx3 and works cross-platform.
File details
Details for the file ezollama-0.1.9.tar.gz.
File metadata
- Download URL: ezollama-0.1.9.tar.gz
- Upload date:
- Size: 4.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.10.19
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 713e29cb3f9c00bcc0b6d9dfb0a2a741722a70e8766d0ba94190d936ac8cb4ef |
| MD5 | 2167a16470f10d7f642c1184c54824a8 |
| BLAKE2b-256 | de1fbff81989c372689de7411cb6be0a819d76b378af5c4da491a54dd03c944e |
File details
Details for the file ezollama-0.1.9-py3-none-any.whl.
File metadata
- Download URL: ezollama-0.1.9-py3-none-any.whl
- Upload date:
- Size: 5.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.10.19
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 3acade3e03642d8f9dc5c3f12c1581531b8a527873ce0c3c1ee2a5f79ea35cdd |
| MD5 | 825eeeda9172ea61a022057b63105e79 |
| BLAKE2b-256 | 51ed8a865c124a56f0bb414e4b34b35d814c396805a6073913ceb01f26d0740d |