Multi-market OHLCV data SDK with AI context builder and LLM integration for Vietnamese stocks, US stocks, crypto, and commodities


aipriceaction

Live site: aipriceaction.com | GitHub: aipriceaction | Frontend: aipriceaction-web | Docker image: quanhua92/aipriceaction:latest | Python SDK: aipriceaction on PyPI

Python SDK for AIPriceAction — OHLCV data access and AI context builder for multi-market investment analysis. Reads from a public S3 archive (no API credentials needed).

Install

pip install aipriceaction

Data Sources

The SDK reads OHLCV data from an S3-compatible archive. All sources are auto-detected from ticker metadata — no need to specify which market a ticker belongs to.

Source | Examples | Intervals
Vietnamese stocks (VCI) | VCB, FPT, VNINDEX | 1m, 1h, 1D
US / international stocks (Yahoo) | AAPL, GOOGL, GC=F | 1m, 1h, 1D
Cryptocurrency (Binance) | BTCUSDT, ETHUSDT | 1m, 1h, 1D
SJC gold | SJC-GOLD | 1D

Quick Start

from aipriceaction import AIPriceAction

client = AIPriceAction()

# Ticker metadata
tickers = client.get_tickers()            # all tickers
tickers = client.get_tickers(source="vn") # filter by source

# OHLCV data as DataFrame
df = client.get_ohlcv("VCB", interval="1D")                         # VN stock
df = client.get_ohlcv("AAPL", interval="1D")                        # US stock
df = client.get_ohlcv("BTCUSDT", interval="1D")                     # crypto
df = client.get_ohlcv(tickers=["VCB", "FPT", "BTCUSDT"], interval="1D")  # mixed

# Date range, limit, MA indicators
df = client.get_ohlcv("VCB", start_date="2025-01-01", end_date="2025-04-30", ma=True)
df = client.get_ohlcv("VCB", interval="1D", limit=100, ema=True)    # EMA instead of SMA

Override the S3 endpoint if self-hosting:

client = AIPriceAction(base_url="https://your-s3-endpoint/archive")

Data is cached to disk by default (temp dir). Set cache_dir for persistent caching:

client = AIPriceAction(cache_dir="./cache")

Timezone

All OHLCV data is stored in UTC+0. By default, the SDK converts timestamps to UTC+7 (ICT, Vietnam timezone) for display. Pass utc_offset=0 to keep raw UTC, or any integer hour offset:

client = AIPriceAction(utc_offset=0)       # keep raw UTC
client = AIPriceAction(utc_offset=9)       # UTC+9 (JST/KST)
client = AIPriceAction(utc_offset=-5)      # UTC-5 (EST)

Live Data

By default the SDK reads from the S3 archive, which may be stale by minutes to hours. Enable use_live=True to overlay live data from the REST API on top of the S3 data:

client = AIPriceAction(use_live=True)
df = client.get_ohlcv("VCB", interval="1D", limit=5, ma=False)

When enabled, for native intervals (1D, 1h, 1m) the SDK:

  • Fetches live data from the REST API (https://api.aipriceaction.com by default)
  • Overwrites the last candle(s) from S3 with live data
  • Appends any newer candles not yet in the archive
  • Falls back to S3-only data if the live API is unreachable

Live responses are cached in memory for 120 seconds to avoid redundant API calls. On API failure, stale cached data is returned if available.
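
As a minimal sketch (using only the parameters shown above), an archive-only client and a live-overlay client should return the same frame except for the most recent candle(s):

from aipriceaction import AIPriceAction

archive_client = AIPriceAction()               # S3 archive only
live_client = AIPriceAction(use_live=True)     # S3 archive + live overlay

# Same call; the live client overwrites/extends the last candle(s) when the API
# is reachable, and falls back to the archive-only result when it is not.
print(archive_client.get_ohlcv("VCB", interval="1D", limit=3).tail(1))
print(live_client.get_ohlcv("VCB", interval="1D", limit=3).tail(1))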

Direct live data access

Use fetch_live_data() to get the raw live API response (all tickers, latest bar) without S3 — useful for market snapshots:

data = client.fetch_live_data("1D")  # {"VCB": [{"time": ..., "open": ..., ...}], ...}

Returns a dict mapping ticker symbol to a list of candle dicts. Cached in memory for 120 seconds.
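
For example, a quick snapshot of the latest close for a few tickers might look like the sketch below (it assumes each candle dict carries a "close" field alongside "time" and "open"):

snapshot = client.fetch_live_data("1D")
for symbol in ("VCB", "FPT", "BTCUSDT"):
    candles = snapshot.get(symbol, [])
    if candles:
        # The last element is the most recent bar for that ticker
        print(symbol, candles[-1].get("close"))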

Point to a self-hosted instance with live_url:

client = AIPriceAction(
    base_url="https://your-s3-endpoint/archive",
    use_live=True,
    live_url="https://your-api-instance.com",
)

AI Context Builder

Build structured context strings for LLM-powered investment analysis. Accepts the same utc_offset parameter as AIPriceAction (default 7 = UTC+7).

from aipriceaction import AIContextBuilder

builder = AIContextBuilder(lang="en", utc_offset=7)  # default UTC+7

# Single ticker (VNINDEX included as reference by default)
context = builder.build(ticker="VCB", interval="1D")

# Multi ticker
context = builder.build(tickers=["VCB", "FPT", "TCB"], interval="1D")

# All tickers for a source (uses live API when limit=1 for speed)
context = builder.build(source="vn", interval="1D", limit=1, reference_ticker=None, include_system_prompt=False)

# No data — system prompt + disclaimer only
context = builder.build()

# Omit VNINDEX reference
context = builder.build(ticker="VCB", interval="1D", reference_ticker=None)

Browse Question Bank

for q in builder.questions("single"):
    print(f"{q['title']}: {q['snippet']}")

Ask LLM

Requires OPENAI_API_KEY. The context is built once and reused across answer() calls for KV cache efficiency.

builder.build(ticker="VCB", interval="1D")

response = builder.answer("What is the current trend?")
follow_up = builder.answer("What is the support level?")  # faster, KV cache hit

Configuration

Set via environment variables or a .env file:

Variable | Default | Description
OPENAI_API_KEY | "" | API key for LLM calls
OPENAI_BASE_URL | https://openrouter.ai/api/v1 | LLM API endpoint
OPENAI_MODEL | openai/gpt-oss-20b | Default LLM model
AI_CONTEXT_LANG | en | Context language (en or vi)
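
For example, a .env file might look like the following (the API key is a placeholder; any variable left unset falls back to the default above):

OPENAI_API_KEY=your-api-key
OPENAI_BASE_URL=https://openrouter.ai/api/v1
OPENAI_MODEL=openai/gpt-oss-20b
AI_CONTEXT_LANG=en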

OpenRouter Models

Curated free-tier models available via OpenRouter:

from aipriceaction.llm_models import OpenRouter

for m in OpenRouter.FREE:
    print(f"{m.id}: {m.label}")

Examples

Build context

from aipriceaction import AIContextBuilder

builder = AIContextBuilder(lang="en")

# Single ticker — prints questions, then full context
builder.build(ticker="VCB", interval="1D")
print(builder._last_context)

Multi-ticker context

builder.build(tickers=["VCB", "FPT", "TCB"], interval="1D")
print(builder._last_context)

System prompt only (no market data)

context = builder.build()
print(context)

Build context + call LLM

Build once, ask multiple questions — the second call is faster due to LLM KV cache.

from aipriceaction import AIContextBuilder

builder = AIContextBuilder(lang="en")

# Build context once
builder.build(ticker="VCB", interval="1D")

# First question (cold)
response1 = builder.answer("What is the current trend?")

# Follow-up (warm — same context prefix, KV cache hit)
response2 = builder.answer("What is the support level?")

Multi-timeframe analysis

Switch timeframe between questions. Pass previous responses as history so the LLM can cross-reference timeframes.

from aipriceaction import AIContextBuilder

builder = AIContextBuilder(lang="en")

# Daily context — big picture
builder.build(ticker="VIC", interval="1D")
daily_response = builder.answer("What is the weekly trend?")

# Hourly context — intraday detail, with daily analysis as history
builder.build(ticker="VIC", interval="1h")
hourly_response = builder.answer(
    "Confirm or reject the daily trend using intraday data.",
    history=[daily_response],
)

More examples in examples/:

Example | Description
single_ticker.py | Build context for one ticker
multi_ticker.py | Build context for multiple tickers
multi_timeframe.py | Multi-timeframe: daily + hourly with history
reference_ticker.py | Context with VNINDEX reference
llm_question.py | Build context + call LLM
system_prompt_only.py | System prompt without ticker data
langchain_agent.py | LangChain ReAct agent with AIContextBuilder and tool-calling
multi_agent.py | Multi-agent parallel sector research with LangGraph Send()

System Prompt

get_system_prompt() builds the AIPriceAction system prompt from composable sections. Use the bool flags to customize which sections are included — useful when different agents in a multi-agent pipeline need different prompts.

from aipriceaction.system import get_system_prompt

# Full prompt — data-analyzing agents (workers)
prompt = get_system_prompt("en")

# Skip strict data policy — aggregator works with text, not raw data
prompt = get_system_prompt("en", include_data_policy=False)

# Format only — writer doesn't analyze or validate data
prompt = get_system_prompt("en",
    include_data_policy=False,
    include_analysis_framework=False,
)

Parameters

Parameter | Default | Description
lang | | Language: "en" or "vn"
ma_type | "ema" | Moving average type: "ema" or "sma"
include_ma_score | True | Include the MA Score explanation section
include_disclaimer | True | Include the investment disclaimer section
include_data_policy | True | Include strict data-usage rules (never hallucinate, ask the user to paste data); set False for agents working with text from other agents
include_analysis_framework | True | Include VPA/Wyckoff analysis priorities; set False for formatting/writer agents

Prompt Sections

The assembled prompt includes these sections (when enabled):

Section | Controlled by | Description
Identity | Always | Branding, language instruction, expertise areas
Data Policy | include_data_policy | Rules for using only provided data, no hallucination
Analysis Framework | include_analysis_framework | Chart context description, VPA/Wyckoff priorities
Communication Style | Always | Output format, objectivity, disclaimers
MA Score Explanation | include_ma_score | How the MA Score is calculated and interpreted
Investment Disclaimer | include_disclaimer | Legal disclaimer about investment risks

Use get_system_prompt_with_ticker_info() for single-ticker contexts that display ticker metadata in the analysis framework.

LangChain Agent

Build a ReAct agent that starts from VNINDEX context produced by AIContextBuilder and uses tool-calling to research additional tickers. Pass include_system_prompt=False so the built context contains market data only (the system prompt is supplied separately via system_prompt=, avoiding duplication), and let the tools reuse the same builder so their output formatting stays consistent.

from langchain.agents import create_agent
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import MemorySaver

from aipriceaction import AIPriceAction, AIContextBuilder
from aipriceaction.settings import settings
from aipriceaction.system import get_system_prompt

LANG = settings.ai_context_lang
_client = AIPriceAction()
_builder = AIContextBuilder(lang=LANG)

@tool
def get_ohlcv_data(ticker: str, interval: str = "1D", limit: int = 30) -> str:
    """Fetch OHLCV data for a ticker with MA indicators."""
    try:
        return _builder.build(ticker=ticker, interval=interval, limit=limit,
                             reference_ticker=None, include_system_prompt=False)
    except Exception as e:
        return f"Error fetching {ticker}: {e}"

@tool
def get_ticker_list(source: str | None = None) -> str:
    """List available ticker symbols and metadata."""
    tickers = _client.get_tickers(source=source)
    return "\n".join(f"{t.ticker} ({t.source})" for t in tickers)

initial_context = _builder.build(ticker="VNINDEX", interval="1D",
                                limit=10, include_system_prompt=False)

llm = ChatOpenAI(api_key=settings.openai_api_key,
                 base_url=settings.openai_base_url,
                 model=settings.openai_model)

AGENT_INSTRUCTIONS = """
You have tools to fetch OHLCV data and list available tickers.
Research workflow (MANDATORY):
1. Call get_ohlcv_data for each ticker explicitly mentioned in the question.
2. Call get_ticker_list to discover tickers in the same sectors.
3. Call get_ohlcv_data for at least 2-3 additional tickers per sector.
4. Provide per-ticker analysis, sector rotation observations, and ranking table.
"""

agent = create_agent(
    llm,
    [get_ticker_list, get_ohlcv_data],
    checkpointer=MemorySaver(),
    system_prompt=get_system_prompt(LANG) + "\n\n" + AGENT_INSTRUCTIONS,
)

for event in agent.stream(
    {"messages": [{"role": "user",
                   "content": f"{initial_context}\n\nResearch VIC, STB, SSI and related tickers."}]},
    config={"configurable": {"thread_id": "demo"}},
    stream_mode="updates",
):
    ...

See examples/langchain_agent.py for the full example.

Multi-Agent

Build a multi-agent system with LangGraph Send() for parallel sector research.

START → fetch market snapshot → [supervisor] → Send() fan-out → [worker agents x N] → [aggregator] → [writer] → END

The supervisor receives a full market snapshot (latest bar for all VN tickers via the live API) and uses it to pick the most relevant sectors and tickers for each worker. Workers and the aggregator also receive the snapshot as context for cross-referencing. Each node runs a specialized role with a tailored system prompt via get_system_prompt() bool flags:

from langchain.agents import create_agent
from langchain_core.messages import SystemMessage, HumanMessage
from langchain_openai import ChatOpenAI
from langgraph.graph import START, END, StateGraph, add_messages
from langgraph.types import Send

from aipriceaction import AIPriceAction, AIContextBuilder
from aipriceaction.settings import settings
from aipriceaction.system import get_system_prompt

LANG = settings.ai_context_lang

# Workers — full system prompt (data policy + analysis framework)
# They fetch real data via tools and produce per-sector analysis.
worker = create_agent(
    llm,
    [get_ticker_list, get_ohlcv_data],
    system_prompt=get_system_prompt(LANG) + "\n\n" + worker_instructions,
)

# Aggregator — synthesize worker reports, skip strict data policy
# since it works with text from workers, not raw market data.
agg_sys = get_system_prompt(LANG, include_data_policy=False)
llm.invoke([
    SystemMessage(content=agg_sys + "\n\n" + aggregator_instructions),
    HumanMessage(content=sector_reports),
])

# Writer — format only, skip data policy and analysis framework
writer_sys = get_system_prompt(
    LANG, include_data_policy=False, include_analysis_framework=False,
)
llm.invoke([
    SystemMessage(content=writer_sys + "\n\n" + writer_instructions),
    HumanMessage(content=analysis),
])

# Fan out to parallel workers via Send()
graph.add_conditional_edges("supervisor", lambda state: [
    Send("worker", {"messages": [...], "sector": st["sector"], ...})
    for st in state["subtasks"]
], ["worker"])

Key design principles:

  • Workers have tools (get_ohlcv_data, get_ticker_list) and the full system prompt. They fetch real data and produce per-sector analysis text.
  • Aggregator has no tools — it receives worker reports in a HumanMessage and synthesizes them into a unified analysis. include_data_policy=False because it works with text, not raw numbers.
  • Writer has no tools — it receives the aggregator's analysis in a HumanMessage and formats it. include_data_policy=False, include_analysis_framework=False because it only formats.
  • All prompts are language-aware (respect LANG setting from AI_CONTEXT_LANG).
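
The snippets above leave out the graph wiring itself. Below is a minimal, self-contained sketch of how the pipeline could be assembled with StateGraph; the node functions are stubs and the state keys are illustrative, not the exact code in examples/multi_agent.py (there the nodes call the agents shown above).

import operator
from typing import Annotated, TypedDict

from langgraph.graph import START, END, StateGraph, add_messages
from langgraph.types import Send

class ResearchState(TypedDict):
    messages: Annotated[list, add_messages]
    snapshot: str                                  # market snapshot text shared with every node
    subtasks: list                                 # sector assignments from the supervisor
    sector_reports: Annotated[list, operator.add]  # worker outputs, concatenated across branches

# Stub nodes; the real example invokes the worker/aggregator/writer agents built above.
def fetch_snapshot(state):
    return {"snapshot": "latest bar for all VN tickers"}

def supervisor(state):
    return {"subtasks": [{"sector": "banks"}, {"sector": "brokers"}]}

def worker(state):
    return {"sector_reports": [f"report for {state['sector']}"]}

def aggregator(state):
    return {"messages": [("assistant", "unified analysis")]}

def writer(state):
    return {"messages": [("assistant", "formatted report")]}

graph = StateGraph(ResearchState)
graph.add_node("fetch_snapshot", fetch_snapshot)
graph.add_node("supervisor", supervisor)
graph.add_node("worker", worker)
graph.add_node("aggregator", aggregator)
graph.add_node("writer", writer)

graph.add_edge(START, "fetch_snapshot")
graph.add_edge("fetch_snapshot", "supervisor")
# Fan out: one worker run per subtask, executed in parallel
graph.add_conditional_edges("supervisor", lambda state: [
    Send("worker", {"snapshot": state["snapshot"], "sector": st["sector"]})
    for st in state["subtasks"]
], ["worker"])
graph.add_edge("worker", "aggregator")
graph.add_edge("aggregator", "writer")
graph.add_edge("writer", END)

app = graph.compile()
result = app.invoke({"messages": [("user", "Research VIC, STB, SSI")]})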

See examples/multi_agent.py for the full example.

License

MIT
