
Multisource sentiment-focused research assistant (Reddit, Bluesky, optional Web).

This project has been archived by its maintainers; no new releases are expected.


Vociro — Multisource Research Assistant

A command-line helper that spins up one or more autonomous search agents powered by OpenAI (o3 or o4-mini). Each agent can gather information from:

  • DuckDuckGo Web Search (HTML scrape)
  • Reddit posts (plus top comments)
  • Bluesky posts

The collected evidence is summarised and passed to a report compiler model that produces the final analysis. If the compiler judges the results insufficient, it can request another search round via an internal redo_search tool.

Why?

To quickly answer exploratory questions that benefit from perspectives across traditional web pages, social-media discussion (Reddit), and emerging networks (Bluesky), without juggling multiple APIs or manual browsing.

Architecture

┌──────────────┐   1. strategy_model (o3/o4-mini)
│ generate     │      • Produces 3-8 search queries
│ search       │
│ objectives   │
└──────┬───────┘
       │queries[]
┌──────▼───────┐   2. N search agents (agent_model)
│ each agent   │      • Picks one query
│ uses tools   │      • Calls search_web / search_reddit / search_bsky
└──────┬───────┘
       │summaries[]
┌──────▼───────┐   3. report_model
│ compile      │      • Writes final report
│ final report │      • May call redo_search to loop back
└──────────────┘
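The three stages above can be sketched as a plain Python loop. Everything here is illustrative: the function names, the toy "insufficient evidence" check, and the stubbed model calls are assumptions for the sketch, not Vociro's actual API.

```python
# Minimal runnable sketch of the three-stage pipeline in the diagram above.
from typing import Callable, List

def generate_queries(objective: str) -> List[str]:
    # Stage 1: a real implementation would ask the strategy model (o3/o4-mini)
    # for 3-8 search queries; here we derive trivial variants of the objective.
    return [f"{objective} site:reddit.com", f"{objective} bluesky", objective]

def run_agent(query: str) -> str:
    # Stage 2: a real agent would call search_web / search_reddit / search_bsky
    # and summarise the hits; here we return a placeholder summary.
    return f"summary of results for: {query}"

def compile_report(summaries: List[str], redo_search: Callable[[], List[str]],
                   max_rounds: int = 2) -> str:
    # Stage 3: the compiler may call redo_search to loop back for more evidence.
    rounds = 0
    while len(summaries) < 3 and rounds < max_rounds:  # toy sufficiency check
        summaries = summaries + redo_search()
        rounds += 1
    return "\n".join(summaries)

def research(objective: str) -> str:
    queries = generate_queries(objective)
    summaries = [run_agent(q) for q in queries]
    return compile_report(summaries, redo_search=lambda: [run_agent(objective)])

report = research("public sentiment on electric bikes")
print(report)
```

The key design point is that the compiler owns the retry decision: the search agents are fire-and-forget, and only stage 3 can loop the pipeline back to stage 2.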

Installation

# (optional) create and activate a virtual environment
python -m venv .venv && source .venv/bin/activate

# install from PyPI
pip install vociro

Environment variables

Set the following variables in your terminal session before running Vociro (no .env file is used):

Variable                                   Purpose
OPENAI_API_KEY                             Your OpenAI key (mandatory)
REDDIT_CLIENT_ID & REDDIT_CLIENT_SECRET    Reddit app credentials
BLUESKY_HANDLE & BLUESKY_APP_PASSWORD      Bluesky login (optional – improves rate-limits)

Examples (Unix shells):

export OPENAI_API_KEY="sk-..."
export REDDIT_CLIENT_ID="abc" REDDIT_CLIENT_SECRET="xyz"

Windows (PowerShell):

$env:OPENAI_API_KEY = "sk-..."   # current session
setx OPENAI_API_KEY "sk-..."     # persists for future sessions (does not affect the current one)
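A small pre-flight check can catch a missing or half-configured environment before a run. The `check_env` helper below is an illustration, not part of Vociro; the grouping simply mirrors the table above (the OpenAI key is mandatory, the Reddit and Bluesky credentials only make sense as pairs).

```python
import os
from typing import List, Mapping

def check_env(env: Mapping[str, str] = os.environ) -> List[str]:
    """Return a list of problems with the environment; empty means OK."""
    problems = []
    if not env.get("OPENAI_API_KEY"):
        problems.append("OPENAI_API_KEY is mandatory")
    # Reddit credentials come as a pair; supplying only one is a likely mistake.
    reddit = [k for k in ("REDDIT_CLIENT_ID", "REDDIT_CLIENT_SECRET") if env.get(k)]
    if len(reddit) == 1:
        problems.append("set both REDDIT_CLIENT_ID and REDDIT_CLIENT_SECRET")
    # Bluesky login is optional, but also only makes sense as a pair.
    bsky = [k for k in ("BLUESKY_HANDLE", "BLUESKY_APP_PASSWORD") if env.get(k)]
    if len(bsky) == 1:
        problems.append("set both BLUESKY_HANDLE and BLUESKY_APP_PASSWORD")
    return problems

if __name__ == "__main__":
    for problem in check_env():
        print("warning:", problem)
```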

Usage

vociro init  # start an interactive research session
  1. Clarification phase — the assistant asks follow-up questions until it proposes a final objective:

    READY: <concise objective>
    

    You must then confirm with y (accept) or n (explain why, loop continues). Press z at any prompt to skip the phase entirely.

  2. Source selection
    • Reddit and Bluesky are always enabled (sentiment sources).
    • DuckDuckGo Web search is optional (default n).

3. Model selection / number of agents — choose the OpenAI models (o3 or o4-mini) for each role and how many search agents to launch.

During execution you will see, for each generated search query:

Search — <query>
Tool calls:
  1. search_reddit(query='…')
  2. search_bsky(query='…')
  …
Total cost so far: $0.0123

The agent is encouraged to perform deep dives (many tool calls) on Reddit and Bluesky to surface real user sentiment. The report compiler will call redo_search automatically if it feels more evidence is required.

Skip everything quickly

If you want a totally non-interactive run you can feed inputs through stdin, e.g.

echo -e "My question\nz\n\n\no4-mini\no3\n" | vociro init | cat

(The first z skips clarifications.)

Extending functionality

  • Add new search back-ends by:
    1. Implementing a simple Python function that returns JSON-serialisable results.
    2. Registering a matching schema in build_tool_specs().
    3. Handling the tool call in execute_tool().
  • All OpenAI calls are centralised, so adding caching or async batching is straightforward.
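The three steps above might look like the sketch below for a hypothetical Hacker News back-end. Only the names build_tool_specs() and execute_tool() come from this README; the exact shapes of the schema dict and the dispatch are assumptions based on the description.

```python
import json

# 1. A plain function returning JSON-serialisable results.
def search_hn(query: str, limit: int = 5) -> list:
    # A real version would call a Hacker News search API; stubbed here.
    return [{"title": f"result {i} for {query}", "points": i} for i in range(limit)]

# 2. A matching schema, as it might appear inside build_tool_specs().
HN_TOOL_SPEC = {
    "type": "function",
    "function": {
        "name": "search_hn",
        "description": "Search Hacker News stories",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string"},
                "limit": {"type": "integer", "default": 5},
            },
            "required": ["query"],
        },
    },
}

# 3. Dispatch by tool name, as execute_tool() might handle it.
def execute_tool(name: str, arguments: str) -> str:
    args = json.loads(arguments)
    if name == "search_hn":
        return json.dumps(search_hn(**args))
    raise ValueError(f"unknown tool: {name}")

print(execute_tool("search_hn", '{"query": "rust", "limit": 2}'))
```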

Caveats

  • DuckDuckGo HTML scraping is brittle and for light personal use only.
  • The project is not production-grade: no rate-limit back-off, retries or robust error handling.
  • Token counts rely on the usage field from the OpenAI response and may vary slightly from billing.
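The last caveat refers to deriving cost from the usage field of each OpenAI response. A tally along those lines might look like the sketch below; the per-million-token prices are placeholders, not real rates, so always check current pricing before trusting the numbers.

```python
# Toy cost tally from the `usage` field of an OpenAI response.
PRICES_PER_MTOK = {            # (input, output) USD per million tokens — assumed
    "o3": (2.0, 8.0),
    "o4-mini": (1.1, 4.4),
}

def cost_of(model: str, usage: dict) -> float:
    """Compute dollars for one response from its usage dict."""
    inp, out = PRICES_PER_MTOK[model]
    return (usage["prompt_tokens"] * inp + usage["completion_tokens"] * out) / 1_000_000

total = cost_of("o4-mini", {"prompt_tokens": 1000, "completion_tokens": 500})
print(f"Total cost so far: ${total:.4f}")
```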

License

MIT – do what you like, just don't blame me.

Download files

Source Distribution
  vociro-0.1.1.tar.gz (18.1 kB)

Built Distribution
  vociro-0.1.1-py3-none-any.whl (25.0 kB, Python 3)

File details: vociro-0.1.1.tar.gz

  • Size: 18.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.11.9

Hashes:
  SHA256       0d4ba27d573b5a57623cb7c72276bac37eaae644a947b5b1364b6bf1adaca4b1
  MD5          4ad2fddc1cbda049cc190f9e7647b1fb
  BLAKE2b-256  efab2051370eb8e722ca6f2f72907483fcaf9865a738853dd4e180067ef91f15

File details: vociro-0.1.1-py3-none-any.whl

  • Size: 25.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.11.9

Hashes:
  SHA256       4ead8f07b8b5431435479699b7b5e6855049aa6357df5cb3cdfa339208ee9c35
  MD5          ec9b336846f62b2c801647d3051349ae
  BLAKE2b-256  a668a270a93124d3fcc1a525425a6f5755f2b25ec9a9dc76cfea7c161ac10139
