Local agent building at scale using Ollama - comprehensive LangGraph patterns for local LLMs
LangGraph Ollama Local
Learn LangGraph by building agents that run entirely on your hardware.
What This Is
A hands-on tutorial series for building LangGraph agents with local LLMs via Ollama. Each notebook teaches a concept from scratch - no cloud APIs required.
You'll learn:
- LangGraph fundamentals: StateGraph, nodes, edges, reducers
- Tool calling and the ReAct pattern
- Memory and conversation persistence
- Human-in-the-loop workflows
- Self-reflection and critique patterns
- RAG patterns: Basic RAG, Self-RAG, CRAG, Adaptive RAG, Agentic RAG
- Build a Perplexity-style research assistant
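As a taste of the first concept, here is a minimal pure-Python sketch of what a LangGraph-style reducer does: it merges a node's partial update into the shared graph state. This is illustrative only and not LangGraph's actual implementation (the real `add_messages` reducer also deduplicates by message ID and handles message types):

```python
from typing import Callable

# A reducer takes the current channel value and a node's update,
# and returns the merged value. Appending is the common case for messages.
def append_reducer(current: list, update: list) -> list:
    return current + update

def apply_update(state: dict, update: dict,
                 reducers: dict[str, Callable]) -> dict:
    """Merge a node's partial update into the graph state."""
    merged = dict(state)
    for key, value in update.items():
        reducer = reducers.get(key)
        merged[key] = reducer(state.get(key, []), value) if reducer else value
    return merged

state = {"messages": [("user", "hi")]}
state = apply_update(state, {"messages": [("assistant", "hello!")]},
                     {"messages": append_reducer})
# state["messages"] now holds both turns
```

Without a reducer, a node returning `{"messages": [...]}` would overwrite the history; the reducer is what makes state accumulate across nodes.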
Quick Start
Prerequisites
- Python 3.12+
- Ollama running locally or on your LAN:
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh
# Pull a model
ollama pull llama3.2:3b
Installation
git clone https://github.com/AbhinaavRamesh/langgraph-ollama-tutorial.git
cd langgraph-ollama-tutorial
# Install with RAG dependencies
pip install -e ".[all]"
# Verify connection
langgraph-local check
Configuration
cp .env.example .env
# Edit .env with your settings
| Variable | Default | Description |
|---|---|---|
| OLLAMA_HOST | 127.0.0.1 | Ollama server address |
| OLLAMA_PORT | 11434 | Ollama server port |
| OLLAMA_MODEL | llama3.2:3b | Default model |
| TAVILY_API_KEY | (optional) | For web search in CRAG tutorials |
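These settings typically reach the library through environment variables. A hedged sketch of how such defaults might be resolved (the package's actual `config.py` may differ; `OllamaSettings` and `load_settings` are illustrative names):

```python
import os
from dataclasses import dataclass

@dataclass
class OllamaSettings:
    host: str
    port: int
    model: str

    @property
    def base_url(self) -> str:
        # The URL a LangChain/Ollama client would connect to.
        return f"http://{self.host}:{self.port}"

def load_settings(env=None) -> OllamaSettings:
    # Fall back to the documented defaults when a variable is unset.
    env = os.environ if env is None else env
    return OllamaSettings(
        host=env.get("OLLAMA_HOST", "127.0.0.1"),
        port=int(env.get("OLLAMA_PORT", "11434")),
        model=env.get("OLLAMA_MODEL", "llama3.2:3b"),
    )

settings = load_settings({"OLLAMA_HOST": "192.168.1.5"})
# settings.base_url -> "http://192.168.1.5:11434"
```

Passing an explicit mapping instead of reading `os.environ` directly keeps the resolution logic easy to test.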
Web Search Setup (Optional)
For CRAG and Perplexity-style tutorials, get a free Tavily API key:
- Sign up at https://tavily.com
- Get your API key from the dashboard
- Add to .env: TAVILY_API_KEY=tvly-your-key-here
LAN Server with Monitoring
To host Ollama on a GPU machine accessible across your network, use ollama-local-serve:
pip install "ollama-local-serve[all]"
make init && make up
# Dashboard at http://your-server:3000
Tutorials
Core Patterns (Phase 2)
| # | Notebook | Documentation | What You'll Learn |
|---|---|---|---|
| 01 | Chatbot Basics | docs | StateGraph, nodes, edges, message handling |
| 02 | Tool Calling | docs | Tools, ReAct loop from scratch |
| 03 | Memory & Persistence | docs | Checkpointers, threads, conversation history |
| 04 | Human-in-the-Loop | docs | Interrupts, approvals, resume |
| 05 | Reflection | docs | Generate, Critique, Revise loops |
| 06 | Plan & Execute | docs | Structured outputs, multi-step planning |
| 07 | Research Assistant | docs | Capstone: all patterns combined |
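Tutorial 02's ReAct loop boils down to: let the model either request a tool call or give a final answer, feed tool results back into the transcript, and repeat. A schematic sketch with a stubbed model in place of an Ollama chat call (all names here are illustrative, not the notebook's actual code):

```python
# Toy tool; eval is restricted and for demonstration only.
def calculator(expression: str) -> str:
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def stub_model(messages: list) -> dict:
    # Stand-in for a tool-capable LLM: first call a tool, then answer.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "calculator", "args": "2 + 3"}
    return {"answer": f"The result is {messages[-1]['content']}"}

def react_loop(question: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        decision = stub_model(messages)
        if "answer" in decision:
            return decision["answer"]
        # Execute the requested tool and append its observation.
        result = TOOLS[decision["tool"]](decision["args"])
        messages.append({"role": "tool", "content": result})
    return "Step limit reached"

print(react_loop("What is 2 + 3?"))  # The result is 5
```

The `max_steps` cap matters in practice: small local models occasionally loop on tool calls, and a bounded loop fails gracefully instead of hanging.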
RAG Patterns (Phase 3)
| # | Notebook | Documentation | What You'll Learn |
|---|---|---|---|
| 08 | Basic RAG | docs | Document loading, chunking, embeddings, ChromaDB |
| 09 | Self-RAG | docs | Document grading, hallucination detection, retry loops |
| 10 | CRAG | docs | Web search fallback, corrective retrieval |
| 11 | Adaptive RAG | docs | Query classification, strategy routing |
| 12 | Agentic RAG | docs | Agent-controlled retrieval, multi-step |
| 13 | Perplexity Clone | docs | Citations, source metadata, follow-ups |
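The common thread in tutorials 09-11 is routing on a grader's verdict: keep documents the grader deems relevant, and fall back (retry, or escalate to web search in CRAG) when nothing passes. A schematic sketch with a keyword-overlap stub standing in for an LLM grader (function names are illustrative):

```python
import re

def grade_relevance(question: str, doc: str) -> bool:
    # Stand-in for an LLM relevance-grading prompt: naive keyword overlap.
    tokens = lambda s: set(re.findall(r"\w[\w-]*", s.lower()))
    return bool(tokens(question) & tokens(doc))

def corrective_retrieve(question: str, docs: list[str]) -> dict:
    relevant = [d for d in docs if grade_relevance(question, d)]
    if relevant:
        return {"docs": relevant, "route": "generate"}
    # CRAG-style fallback: no document passed grading, escalate to web search.
    return {"docs": [], "route": "web_search"}

docs = ["Self-RAG grades its own retrievals", "Unrelated cooking notes"]
result = corrective_retrieve("What is Self-RAG?", docs)
# result["route"] -> "generate", with only the relevant document kept
```

In the real graphs, the `route` value drives a conditional edge, so the grader's decision directly determines which node runs next.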
Run any notebook:
jupyter lab examples/
RAG Quick Start
Index your documents and start querying:
from langgraph_ollama_local.rag import DocumentIndexer, LocalRetriever
# Index documents
indexer = DocumentIndexer()
indexer.index_directory("sources/")
# Query
retriever = LocalRetriever()
docs = retriever.retrieve_documents("What is Self-RAG?", k=4)
Project Structure
langgraph-ollama-tutorial/
├── examples/
│ ├── core_patterns/ # Tutorials 01-07
│ │ ├── 01_chatbot_basics.ipynb
│ │ └── ...
│ └── rag_patterns/ # Tutorials 08-13
│ ├── 08_basic_rag.ipynb
│ └── ...
├── docs/
│ ├── core_patterns/ # Core pattern documentation
│ └── rag_patterns/ # RAG pattern documentation
├── sources/ # PDF sources for RAG indexing
├── langgraph_ollama_local/
│ ├── config.py # Configuration
│ ├── cli.py # CLI tools
│ └── rag/ # RAG infrastructure
│ ├── embeddings.py # Local embeddings
│ ├── indexer.py # Document indexing
│ ├── retriever.py # Document retrieval
│ └── graders.py # Quality graders
├── tests/ # Test suite
└── pyproject.toml
CLI
langgraph-local check # Verify Ollama connection
langgraph-local config # Show current configuration
langgraph-local list # List available examples
Development
make test # Run tests
make test-int # Integration tests (requires Ollama)
make lint # Linting
make format # Format code
Adapted From
These tutorials are adapted from the official LangGraph documentation (MIT License), optimized for local Ollama deployment.
License
MIT License
File details
Details for the file langgraph_ollama_local-0.1.0.tar.gz.
File metadata
- Download URL: langgraph_ollama_local-0.1.0.tar.gz
- Upload date:
- Size: 120.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.5
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 170a7761aa1b07ddbf8f9e1b9815fb8424c4578b8c57d869339121beeadab8cd |
| MD5 | 4dc5d6cee425528c4896cb6e33859cfa |
| BLAKE2b-256 | 1a143bbfe55820a7ddd63929b1cc152b017b1f5ba0dbca3f7a7f2523aaf4b031 |
File details
Details for the file langgraph_ollama_local-0.1.0-py3-none-any.whl.
File metadata
- Download URL: langgraph_ollama_local-0.1.0-py3-none-any.whl
- Upload date:
- Size: 95.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.5
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 923b5c8eed5b1daca34e92e0e5a5ba80a1d13c9509c72e08c6fd08257d529aa1 |
| MD5 | 8ab98bea8c9e739239df99d75b8fc1ba |
| BLAKE2b-256 | dc17227a22f0d8d1e2da125847ff652ba4b047a5f4b84bcdec97c3afde1d4566 |