
GPTStonks API allows interacting with financial data sources using natural language.


GPTStonks Chatbot API

Welcome to the GPTStonks Chatbot API documentation! This API lets you interact with a powerful financial chatbot built on top of the OpenBB framework. Whether you're a developer looking to integrate financial chat capabilities into your application or a trader seeking automated financial insights, this API is designed to provide a seamless and customizable experience.

Table of Contents

  • Introduction 🌟
  • Features 🚀
  • Supported LLM Providers
  • Supported Embeddings Providers
  • Getting Started 🛠️
  • Installation 🛸
  • For Production Environments 🏭
  • Usage 💡
  • API Endpoints 🌐
  • Configuration with environment variables ⚙️
  • Contributing 🤝
  • License 📃
  • Disclaimer

Introduction 🌟

GPTStonks is a financial chatbot powered by LLMs and enhanced with the OpenBB framework. It provides natural language conversation capabilities for financial topics, making it an ideal choice for a wide range of financial applications, including:

  • Learning about the financial markets
  • Improving trading strategies
  • Financial news analysis: sentiment, trends, etc.
  • Customer support for financial institutions

This API allows you to integrate the GPTStonks financial chatbot into your projects, enabling real-time financial chat interactions with users.

Features 🚀

  • Real-time Financial Chat: Engage in natural language conversations about financial topics.
  • Customizable Responses: Tailor the chatbot's responses to suit your specific use case.
  • Easy Integration: Built on FastAPI, this API is designed for straightforward integration into your application or platform.
  • Extensive Documentation: Detailed documentation and examples to help you get started quickly.

Supported LLM Providers

  • Llama.cpp: optimized implementations of the most popular open-source LLMs for inference on CPU and GPU. See their docs for more details on supported models, which include Mixtral, Llama 2 and Zephyr, among others. Many quantized models (GGUF) can be found on Hugging Face under the user TheBloke.
  • Amazon Bedrock: foundation models from a variety of providers, including Anthropic and Amazon.
  • OpenAI: GPT family of foundation models. For now, only instruct versions are supported, such as gpt-3.5-turbo-instruct. Chat versions will be added soon.
  • Vertex AI: similar to Amazon Bedrock but provided by Google. This integration is in alpha and not yet recommended for production use.
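
For the Llama.cpp route, quantized GGUF weights have to be downloaded before the API can load them. A sketch of one way to fetch them with the Hugging Face CLI; the repository and file names below are examples, so browse TheBloke's Hugging Face page for the exact model and quantization you want:

```shell
# Example repo/file names — pick the model and quantization level you actually need.
pip install -U huggingface_hub
huggingface-cli download TheBloke/zephyr-7B-beta-GGUF \
  zephyr-7b-beta.Q4_K_M.gguf --local-dir ./models
```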

Supported Embeddings Providers

Getting Started 🛠️

Prerequisites

Before you begin, make sure you have Docker installed on your system.

Full deployment [api + frontend + db] (recommended)

  1. Clone this repository to your local machine:

     git clone https://github.com/gptstonks/api.git

  2. Check the .env.template file for the required environment variables. Modify the values as desired and save the file.

  3. Run the docker-compose file:

     docker-compose up

  4. If you didn't change any port configuration, navigate to http://localhost:3000 to access the frontend.

  5. (Optional) If you want to use your OpenBB PAT to access resources, you can set it from the API Keys section (sidebar) in the frontend.
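
If port 3000 is already in use on your machine, Docker Compose can remap it without editing the main compose file. A minimal docker-compose.override.yml sketch; the service name `frontend` is an assumption, so check the service names defined in this repository's docker-compose file:

```yaml
# docker-compose.override.yml — service name assumed; verify against the repo's compose file.
services:
  frontend:
    ports:
      - "8080:3000"   # expose the frontend on host port 8080 instead of 3000
```

Compose merges override files automatically the next time you run docker-compose up.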

Installation 🛸

  1. Set up environment variables by creating a .env file in the project directory with the contents specified in .env.template.

  2. [Highly Recommended] Option 1: use the latest Docker image from ghcr.io.

     docker run -it -p 8000:8000 --env-file .env ghcr.io/gptstonks/api:main

  • Option 2: build from source.

    1. Install PDM.

    2. Clone this repository to your local machine:

       git clone https://github.com/GPTStonks/api.git

    3. Navigate to the project directory:

       cd projects/gptstonks_api

    4. Install the required dependencies:

       pdm install --no-editable --no-self

    5. Create openssl.cnf to allow legacy TLS renegotiation, needed for OECD data in OpenBB:

       printf '%s\n' \
         'openssl_conf = openssl_init' \
         '' \
         '[openssl_init]' \
         'ssl_conf = ssl_sect' \
         '' \
         '[ssl_sect]' \
         'system_default = system_default_sect' \
         '' \
         '[system_default_sect]' \
         'Options = UnsafeLegacyRenegotiation' \
         > openssl.cnf

    6. Start the API:

       uvicorn gptstonks.api.main:app --host 0.0.0.0 --port 8000

Now your GPTStonks Financial Chatbot API is up and running!

For Production Environments 🏭

For production environments, additional steps are necessary to ensure security and stability:

Ensure that uvicorn is configured with SSL certificates for secure HTTPS communication.

Build the Docker image from source:

docker build -t gptstonks-api:v0.1_pro -f Dockerfile.pro .

Now you can run the Docker image with the following command:

docker run -it -p 443:8000 \
-v /etc/letsencrypt/live/api.gptstonks.net/fullchain.pem:/api/cert.pem \
-v /etc/letsencrypt/live/api.gptstonks.net/privkey.pem:/api/key.pem \
--env-file .env gptstonks-api:v0.1_pro
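
Inside the production image, uvicorn can serve HTTPS directly through its standard SSL flags. A sketch of the equivalent invocation, assuming the certificate and key are available at the container paths used in the docker run command above:

```shell
uvicorn gptstonks.api.main:app --host 0.0.0.0 --port 8000 \
  --ssl-certfile /api/cert.pem --ssl-keyfile /api/key.pem
```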

Usage 💡

To use the GPTStonks Financial Chatbot API, send HTTP requests to the provided endpoints. You can interact with the chatbot by sending messages and receiving responses in real-time.
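
As a minimal sketch using only the Python standard library, the snippet below builds such a request; the endpoint path and payload field are assumptions, so check the live documentation at /docs on your deployment for the actual schema:

```python
import json
import urllib.request

# Hypothetical endpoint and payload shape — verify against /docs on your deployment.
API_URL = "http://localhost:8000/chat"
payload = {"query": "What was AAPL's closing price yesterday?"}

# Build a POST request carrying the JSON-encoded query.
req = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Uncomment to send the request against a running API:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read()))
```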

API Endpoints 🌐

Check http://localhost:8000/docs once the API is started to access the endpoints' documentation.

Configuration with environment variables ⚙️

| Env variable | Required | Default | Description |
|---|---|---|---|
| MONGO_URI | Yes | - | MongoDB URI to connect to. |
| MONGO_DBNAME | Yes | - | MongoDB database to use. |
| AUTOLLAMAINDEX_VSI_GDRIVE_URI | No | None (not downloaded) | Google Drive URL to download the Vector Store Index (VSI). |
| AUTOLLAMAINDEX_EMBEDDING_MODEL_ID | No | "local:BAAI/bge-large-en-v1.5" | Embedding model ID to use with AutoLlamaIndex (must match the VSI). |
| AUTOLLAMAINDEX_SIMILARITY_POSTPROCESSOR_CUTOFF | No | 0.5 | Minimum similarity required when retrieving similar documents. |
| AUTOLLAMAINDEX_REMOVE_METADATA_POSTPROCESSOR | No | None (postprocessor used) | Whether or not to use a metadata postprocessor. |
| AUTOLLAMAINDEX_VSI_PATH | Yes | - | Path to the downloaded VSI. If AUTOLLAMAINDEX_VSI_GDRIVE_URI is given, they will match automatically. |
| AUTOLLAMAINDEX_LLM_CONTEXT_WINDOW | No | 4096 | Context window to use when a Hugging Face model is loaded in AutoLlamaIndex. |
| AUTOLLAMAINDEX_QA_TEMPLATE | No | None (LlamaIndex's default QA template) | Template to use in LlamaIndex's question-answering step. |
| AUTOLLAMAINDEX_REFINE_TEMPLATE | No | None (LlamaIndex's default refine template) | Template to use in AutoLlamaIndex's refine step. |
| AUTOLLAMAINDEX_VIR_SIMILARITY_TOP_K | No | 3 | Number of most similar elements retrieved with vector search. |
| AUTOLLAMAINDEX_RETRIEVER_TYPE | No | None (hybrid retriever used) | Whether to use BM25 together with vector search (hybrid) or only vector search. |
| AUTOMULTISTEPQUERYENGINE_QA_TEMPLATE | No | None (LlamaIndex's default QA template) | Template to use in AutoMultiStepQueryEngine's question-answering step. |
| AUTOMULTISTEPQUERYENGINE_REFINE_TEMPLATE | No | None (LlamaIndex's default refine template) | Template to use in AutoMultiStepQueryEngine's refine step. |
| AUTOMULTISTEPQUERYENGINE_STEPDECOMPOSE_QUERY_PROMPT | No | None (LlamaIndex's default step decompose template) | Template to use in AutoMultiStepQueryEngine's step-decompose step. |
| AUTOMULTISTEPQUERYENGINE_INDEX_SUMMARY | No | "Useful to search information on the Internet." | Index summary used by the multi-step agent to understand its own capabilities and formulate new questions. |
| AGENT_REQUEST_TIMEOUT | No | 20 | Number of seconds to wait before timing out when an API LLM is used (e.g., OpenAI). |
| AGENT_EARLY_STOPPING_METHOD | No | "force" | How the model should return its final output when early stopping is applied. |
| LLM_TEMPERATURE | No | 0.1 | Temperature to use when sampling. |
| LLM_MAX_TOKENS | No | 256 | Maximum number of tokens to sample. |
| LLM_TOP_P | No | 1.0 | Top-p parameter to apply when sampling. |
| LLM_MODEL_ID | Yes | - | ID of the model to use. See the tutorials on GPTStonks' blog for more details. |
| LLM_CHAT_MODEL_SYSTEM_MESSAGE | No | "You write concise and complete answers." | System message when using chat models (e.g., GPT-4). |
| LLM_VERTEXAI_CLOUD_LOCATION | No | None | Google Cloud location to use with Vertex AI. |
| LLM_LLAMACPP_CONTEXT_WINDOW | No | 4000 | Context window to apply when a model is loaded with Llama.cpp. |
| LLM_HF_DEVICE | No | -1 | Device to use with Hugging Face models. |
| LLM_HF_DISABLE_SAMPLING | No | False | Whether or not to disable sampling when generating with Hugging Face models. |
| LLM_HF_DEVICE_MAP | No | None (HuggingFacePipeline default) | Device map to use with Hugging Face models. |
| LLM_HF_BITS | No | 4 | Number of bits of the quantized Hugging Face model. |
| LLM_HF_DISABLE_EXLLAMA | No | False | Whether or not to disable ExLlama with Hugging Face models. |
| LLM_HF_TRUST_REMOTE_CODE | No | False | Whether or not to trust remote code with Hugging Face models. |
| OPENBBCHAT_TOOL_DESCRIPTION | Yes | - | OpenBB Platform tool description for the LLM agent. |
| SEARCH_TOOL_DESCRIPTION | No | None (default DDG search description) | DuckDuckGo search tool description for the LLM agent. |
| WIKIPEDIA_TOOL_DESCRIPTION | No | None (default Wikipedia description) | Wikipedia tool description for the LLM agent. |
| CUSTOM_GPTSTONKS_PREFIX | No | None (default LangChain agent prefix) | Prefix to use with the LLM agent. |
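
For illustration only, a minimal .env covering the required variables might look like the sketch below. Every value is a placeholder: the exact LLM_MODEL_ID format per provider is covered in the GPTStonks blog tutorials, and the VSI path and tool description must match your own setup.

```shell
# All values are placeholders — adapt them to your deployment.
MONGO_URI=mongodb://localhost:27017
MONGO_DBNAME=gptstonks
AUTOLLAMAINDEX_VSI_PATH=./vsi             # path to a downloaded Vector Store Index
LLM_MODEL_ID=gpt-3.5-turbo-instruct       # exact format depends on the provider
OPENBBCHAT_TOOL_DESCRIPTION="Tool to retrieve financial data through the OpenBB Platform."
```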

Contributing 🤝

We welcome contributions from the community! If you have any suggestions, bug reports, or want to contribute to the project, feel free to open issues or propose changes.

License 📃

This project is licensed under the MIT License - see the LICENSE file for details.

Disclaimer

GPTStonks Chat serves as an interface for accessing financial data and general knowledge. It is not intended to provide financial or investment advice.
