
An integration package connecting Tavily and LangChain

Project description

🦜️🔗 LangChain Tavily

PyPI version License: MIT Downloads

This package contains the LangChain integration with Tavily

Introducing tavily-crawl + tavily-map in v0.2.4!

Two powerful new tools, tavily-crawl and tavily-map, have joined the Tavily family! Upgrade now to access them:

pip install -U langchain-tavily

Don't miss out on these exciting new features! Check out the full documentation to learn more.


Installation

pip install -U langchain-tavily

Credentials

We need to set our Tavily API key. You can get an API key by visiting this site and creating an account.

import getpass
import os

if not os.environ.get("TAVILY_API_KEY"):
    os.environ["TAVILY_API_KEY"] = getpass.getpass("Tavily API key:\n")

Tavily Search

Here we show how to instantiate the Tavily search tool. The tool accepts various parameters to customize the search. After instantiation, we invoke the tool with a simple query. This tool lets you run search queries using Tavily's Search API endpoint.

Instantiation

The tool accepts various parameters during instantiation:

  • max_results (optional, int): Maximum number of search results to return. Default is 5.
  • topic (optional, str): Category of the search. Can be "general", "news", or "finance". Default is "general".
  • include_answer (optional, bool | str): Include an answer to the original query in the results. Default is False. String options are "basic" (quick answer) or "advanced" (detailed answer). If True, defaults to "basic".
  • include_raw_content (optional, bool | str): Include the cleaned and parsed HTML content of each search result. "markdown" returns search result content in markdown format. "text" returns the plain text from the results and may increase latency. If True, defaults to "markdown".
  • include_images (optional, bool): Include a list of query related images in the response. Default is False.
  • include_image_descriptions (optional, bool): Include descriptive text for each image. Default is False.
  • search_depth (optional, str): Depth of the search, either "basic" or "advanced". Default is "basic".
  • time_range (optional, str): The time range back from the current date to filter results - "day", "week", "month", or "year". Default is None.
  • include_domains (optional, List[str]): List of domains to specifically include. Default is None.
  • exclude_domains (optional, List[str]): List of domains to specifically exclude. Default is None.
  • country (optional, str): Boost search results from a specific country. This will prioritize content from the selected country in the search results. Available only if topic is "general".

For a comprehensive overview of the available parameters, refer to the Tavily Search API documentation.

from langchain_tavily import TavilySearch

tool = TavilySearch(
    max_results=5,
    topic="general",
    # include_answer=False,
    # include_raw_content=False,
    # include_images=False,
    # include_image_descriptions=False,
    # search_depth="basic",
    # time_range="day",
    # include_domains=None,
    # exclude_domains=None,
    # country=None
)

Invoke directly with args

The Tavily search tool accepts the following arguments during invocation:

  • query (required): A natural language search query
  • The following arguments can also be set during invocation: include_images, search_depth, time_range, include_domains, and exclude_domains.
  • For reliability and performance reasons, certain parameters that affect response size cannot be modified during invocation: include_answer and include_raw_content. These limitations prevent unexpected context window issues and ensure consistent results.

NOTE: If you set an argument during instantiation, that value persists and overrides any value passed during invocation.

# Basic query
tool.invoke({"query": "What happened at the last wimbledon"})

output:

{
 'query': 'What happened at the last wimbledon',
 'follow_up_questions': None,
 'answer': None,
 'images': [],
 'results':
 [{'url': 'https://en.wikipedia.org/wiki/Wimbledon_Championships',
   'title': 'Wimbledon Championships - Wikipedia',
   'content': 'Due to the COVID-19 pandemic, Wimbledon 2020 was cancelled ...',
   'score': 0.62365627198,
   'raw_content': None},
    ...
    {'url': 'https://www.cbsnews.com/news/wimbledon-men-final-carlos-alcaraz-novak-djokovic/',
    'title': "Carlos Alcaraz beats Novak Djokovic at Wimbledon men's final to ...",
    'content': 'In attendance on Sunday was Catherine, the Princess of Wales ...',
    'score': 0.5154731446,
    'raw_content': None}],
  'response_time': 2.3
}
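The response is a plain Python dict, so it can be post-processed directly. Below is a minimal sketch that filters and ranks the sample results shown above by their relevance score (the dict literal is trimmed from the example output; the 0.5 threshold is an arbitrary choice for illustration):

```python
# Sample response in the shape returned by TavilySearch (trimmed from the output above)
response = {
    "query": "What happened at the last wimbledon",
    "results": [
        {"url": "https://en.wikipedia.org/wiki/Wimbledon_Championships",
         "title": "Wimbledon Championships - Wikipedia",
         "score": 0.62365627198},
        {"url": "https://www.cbsnews.com/news/wimbledon-men-final-carlos-alcaraz-novak-djokovic/",
         "title": "Carlos Alcaraz beats Novak Djokovic at Wimbledon men's final to ...",
         "score": 0.5154731446},
    ],
}

# Keep only results above a relevance threshold, best first
top = sorted(
    (r for r in response["results"] if r["score"] >= 0.5),
    key=lambda r: r["score"],
    reverse=True,
)
urls = [r["url"] for r in top]
print(urls[0])  # the highest-scoring result's URL
```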

Agent Tool Calling

We can use our tools directly with an agent executor by binding the tool to the agent. This gives the agent the ability to dynamically set the available arguments to the Tavily search tool.

In the example below, when we ask the agent to find "What is the most popular sport in the world? include only wikipedia sources", the agent dynamically sets the arguments and invokes the Tavily search tool: Invoking tavily_search with {'query': 'most popular sport in the world', 'include_domains': ['wikipedia.org'], 'search_depth': 'basic'}

# !pip install -qU langchain langchain-openai langchain-tavily
import datetime

from langchain.agents import create_openai_tools_agent, AgentExecutor
from langchain.chat_models import init_chat_model
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_tavily import TavilySearch
from langchain.schema import HumanMessage

# Initialize LLM
llm = init_chat_model(model="gpt-4o", model_provider="openai", temperature=0)

# Initialize Tavily Search Tool
tavily_search_tool = TavilySearch(
    max_results=5,
    topic="general",
)

# Set up Prompt with 'agent_scratchpad'
today = datetime.datetime.today().strftime("%D")
prompt = ChatPromptTemplate.from_messages([
    ("system", f"""You are a helpful research assistant. You will be given a query and will need to
    search the web for the most relevant information. The date today is {today}."""),
    MessagesPlaceholder(variable_name="messages"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),  # Required for tool calls
])

# Create an agent that can use tools
agent = create_openai_tools_agent(
    llm=llm,
    tools=[tavily_search_tool],
    prompt=prompt
)

# Create an Agent Executor to handle tool execution
agent_executor = AgentExecutor(agent=agent, tools=[tavily_search_tool], verbose=True)

user_input =  "What is the most popular sport in the world? include only wikipedia sources"

# Construct input properly as a dictionary
response = agent_executor.invoke({"messages": [HumanMessage(content=user_input)]})

Tavily Extract

Here we show how to instantiate the Tavily extract tool. After instantiation, we invoke the tool with a list of URLs. This tool lets you extract content from URLs using Tavily's Extract API endpoint.

Instantiation

The tool accepts various parameters during instantiation:

  • extract_depth (optional, str): The depth of the extraction, either "basic" or "advanced". Default is "basic".
  • include_images (optional, bool): Whether to include images in the extraction. Default is False.
  • format (optional, str): The format of the extracted web page content. "markdown" returns content in markdown format. "text" returns plain text and may increase latency.

For a comprehensive overview of the available parameters, refer to the Tavily Extract API documentation.

from langchain_tavily import TavilyExtract

tool = TavilyExtract(
    extract_depth="advanced",
    include_images=False,
    format="markdown"
)

Invoke directly with args

The Tavily extract tool accepts the following arguments during invocation:

  • urls (required): A list of URLs to extract content from.
  • Both extract_depth and include_images can also be set during invocation.

NOTE: If you set an argument during instantiation, that value persists and overrides any value passed during invocation.

# Extract content from a URL
result = tool.invoke({
    "urls": ["https://en.wikipedia.org/wiki/Lionel_Messi"]
})

output:

{
    'results': [{
        'url': 'https://en.wikipedia.org/wiki/Lionel_Messi',
        'raw_content': 'Lionel Messi\nLionel Andrés "Leo" Messi...',
        'images': []
    }],
    'failed_results': [],
    'response_time': 0.79
}
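TavilyExtract separates successful extractions (results) from failures (failed_results), so a caller can index content by URL and check for failures explicitly. A minimal sketch using the sample response above:

```python
# Sample response in the shape returned by TavilyExtract (trimmed from the output above)
response = {
    "results": [
        {"url": "https://en.wikipedia.org/wiki/Lionel_Messi",
         "raw_content": 'Lionel Messi\nLionel Andrés "Leo" Messi...',
         "images": []},
    ],
    "failed_results": [],
    "response_time": 0.79,
}

# Index successfully extracted content by URL and surface any failures
content_by_url = {r["url"]: r["raw_content"] for r in response["results"]}
failed = response["failed_results"]
if failed:
    print(f"{len(failed)} URL(s) could not be extracted")

print(len(content_by_url))  # 1
```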

Tavily Crawl

Here we show how to instantiate the Tavily crawl tool. After instantiation, we invoke the tool with a URL. This tool lets you crawl websites using Tavily's Crawl API endpoint.

Instantiation

The tool accepts various parameters during instantiation:

  • max_depth (optional, int): Max depth of the crawl from base URL. Default is 1.
  • max_breadth (optional, int): Max number of links to follow per page. Default is 20.
  • limit (optional, int): Total number of links to process before stopping. Default is 50.
  • instructions (optional, str): Natural language instructions to guide the crawler. Default is None.
  • select_paths (optional, List[str]): Regex patterns to select specific URL paths. Default is None.
  • select_domains (optional, List[str]): Regex patterns to select specific domains. Default is None.
  • exclude_paths (optional, List[str]): Regex patterns to exclude URLs with specific path patterns. Default is None.
  • exclude_domains (optional, List[str]): Regex patterns to exclude specific domains or subdomains from crawling. Default is None.
  • allow_external (optional, bool): Allow following external domain links. Default is False.
  • include_images (optional, bool): Whether to include images in the crawl results.
  • categories (optional, List[str]): Filter URLs by predefined categories. Options are "Careers", "Blogs", "Documentation", "About", "Pricing", "Community", "Developers", "Contact", and "Media". Default is None.
  • extract_depth (optional, str): Depth of content extraction, either "basic" or "advanced". Default is "basic".
  • format (optional, str): The format of the extracted web page content. "markdown" returns content in markdown format. "text" returns plain text and may increase latency.

For a comprehensive overview of the available parameters, refer to the Tavily Crawl API documentation.

from langchain_tavily import TavilyCrawl

tool = TavilyCrawl(
    max_depth=1,
    max_breadth=20,
    limit=50,
    # instructions=None,
    # select_paths=None,
    # select_domains=None,
    # exclude_paths=None,
    # exclude_domains=None,
    # allow_external=False,
    # include_images=False,
    # categories=None,
    # extract_depth=None
    # format=None
)

Invoke directly with args

The Tavily crawl tool accepts the following arguments during invocation:

  • url (required): The root URL to begin the crawl.
  • All other parameters can also be set during invocation: max_depth, max_breadth, limit, instructions, select_paths, select_domains, exclude_paths, exclude_domains, allow_external, include_images, categories, extract_depth, and format.

NOTE: If you set an argument during instantiation, that value persists and overrides any value passed during invocation.

# Basic crawl of a website
result = tool.invoke({
    "url": "https://docs.tavily.com",
    "instructions": "Find SDK documentation",
    "categories": ["Documentation"]
})

output:

{
    'base_url': 'https://docs.tavily.com',
    'results': [{
        'url': 'https://docs.tavily.com/sdk/python',
        'raw_content': 'Python SDK Documentation...',
        'images': []
    },
    {
        'url': 'https://docs.tavily.com/sdk/javascript',
        'raw_content': 'JavaScript SDK Documentation...',
        'images': []
    }],
    'response_time': 10.28
}
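Each crawl result carries its absolute URL, so downstream code often re-keys the pages relative to the base URL. A minimal sketch using the sample response above (str.removeprefix requires Python 3.9+):

```python
# Sample response in the shape returned by TavilyCrawl (trimmed from the output above)
response = {
    "base_url": "https://docs.tavily.com",
    "results": [
        {"url": "https://docs.tavily.com/sdk/python",
         "raw_content": "Python SDK Documentation...", "images": []},
        {"url": "https://docs.tavily.com/sdk/javascript",
         "raw_content": "JavaScript SDK Documentation...", "images": []},
    ],
}

# Index crawled pages by their path relative to the base URL
base = response["base_url"]
pages = {r["url"].removeprefix(base): r["raw_content"] for r in response["results"]}
print(sorted(pages))  # ['/sdk/javascript', '/sdk/python']
```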

Tavily Map

Here we show how to instantiate the Tavily Map tool. After instantiation, we invoke the tool with a URL. This tool lets you create a structured map of website URLs using Tavily's Map API endpoint.

Instantiation

The tool accepts various parameters during instantiation:

  • max_depth (optional, int): Max depth of the mapping from base URL. Default is 1.
  • max_breadth (optional, int): Max number of links to follow per page. Default is 20.
  • limit (optional, int): Total number of links to process before stopping. Default is 50.
  • instructions (optional, str): Natural language instructions to guide the mapping. Default is None.
  • select_paths (optional, List[str]): Regex patterns to select specific URL paths. Default is None.
  • select_domains (optional, List[str]): Regex patterns to select specific domains. Default is None.
  • exclude_paths (optional, List[str]): Regex patterns to exclude URLs with specific path patterns. Default is None.
  • exclude_domains (optional, List[str]): Regex patterns to exclude specific domains or subdomains from mapping. Default is None.
  • allow_external (optional, bool): Allow following external domain links. Default is False.
  • categories (optional, List[str]): Filter URLs by predefined categories ("Careers", "Blogs", "Documentation", "About", "Pricing", "Community", "Developers", "Contact", "Media"). Default is None.

For a comprehensive overview of the available parameters, refer to the Tavily Map API documentation.

from langchain_tavily import TavilyMap

tool = TavilyMap(
    max_depth=2,
    max_breadth=20,
    limit=50,
    # instructions=None,
    # select_paths=None,
    # select_domains=None,
    # exclude_paths=None,
    # exclude_domains=None,
    # allow_external=False,
    # categories=None,
)

Invoke directly with args

The Tavily map tool accepts the following arguments during invocation:

  • url (required): The root URL to begin the mapping.
  • All other parameters can also be set during invocation: max_depth, max_breadth, limit, instructions, select_paths, select_domains, exclude_paths, exclude_domains, allow_external, and categories.

NOTE: If you set an argument during instantiation, that value persists and overrides any value passed during invocation.

# Basic mapping of a website
result = tool.invoke({
    "url": "https://docs.tavily.com",
    "instructions": "Find SDK documentation",
    "categories": ["Documentation"]
})

output:

{
    'base_url': 'https://docs.tavily.com',
    'results': ['https://docs.tavily.com/sdk', 'https://docs.tavily.com/sdk/python/reference', 'https://docs.tavily.com/sdk/javascript/reference', 'https://docs.tavily.com/sdk/python/quick-start', 'https://docs.tavily.com/sdk/javascript/quick-start'],
    'response_time': 10.28
}
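Because TavilyMap returns a flat list of URLs, it pairs naturally with a little post-processing to sketch the site structure. A minimal example that groups the sample URLs above by their first path segment:

```python
from collections import defaultdict
from urllib.parse import urlparse

# Sample response in the shape returned by TavilyMap (trimmed from the output above)
response = {
    "base_url": "https://docs.tavily.com",
    "results": [
        "https://docs.tavily.com/sdk",
        "https://docs.tavily.com/sdk/python/reference",
        "https://docs.tavily.com/sdk/javascript/reference",
        "https://docs.tavily.com/sdk/python/quick-start",
        "https://docs.tavily.com/sdk/javascript/quick-start",
    ],
}

# Group discovered URLs by their first path segment
sections = defaultdict(list)
for url in response["results"]:
    segment = urlparse(url).path.strip("/").split("/")[0] or "(root)"
    sections[segment].append(url)

print({section: len(urls) for section, urls in sections.items()})  # {'sdk': 5}
```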

Tavily Research Agent

This example demonstrates how to build a powerful web research agent using Tavily's search and extract LangChain tools.

Features

  • Internet Search: Query the web for up-to-date information using Tavily's search API
  • Content Extraction: Extract and analyze specific content from web pages
  • Seamless Integration: Works with OpenAI's function calling capability for reliable tool use
# !pip install -qU langchain langchain-openai langchain-tavily
import datetime

from langchain.agents import create_openai_tools_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import ChatOpenAI
from langchain_tavily import TavilySearch, TavilyExtract
from langchain.schema import HumanMessage

# Initialize LLM
llm = ChatOpenAI(temperature=0, model="gpt-4o")

# Initialize Tavily Search Tool
tavily_search_tool = TavilySearch(
    max_results=5,
    topic="general",
)
# Initialize Tavily Extract Tool
tavily_extract_tool = TavilyExtract()

tools = [tavily_search_tool, tavily_extract_tool]

# Set up Prompt with 'agent_scratchpad'
today = datetime.datetime.today().strftime("%D")
prompt = ChatPromptTemplate.from_messages([
    ("system", f"""You are a helpful research assistant. You will be given a query and will need to
    search the web for the most relevant information, then extract content to gain more insights. The date today is {today}."""),
    MessagesPlaceholder(variable_name="messages"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),  # Required for tool calls
])
# Create an agent that can use tools
agent = create_openai_tools_agent(
    llm=llm,
    tools=tools,
    prompt=prompt
)

# Create an Agent Executor to handle tool execution
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

user_input =  "Research the latest developments in quantum computing and provide a detailed summary of how it might impact cybersecurity in the next decade."

# Construct input properly as a dictionary
response = agent_executor.invoke({"messages": [HumanMessage(content=user_input)]})

Tavily Search and Crawl Agent Example

This example demonstrates how to build a powerful web research agent using Tavily's search and crawl LangChain tools to find and analyze information from websites.

Features

  • Internet Search: Query the web for up-to-date information using Tavily's search API
  • Website Crawling: Crawl websites to find specific information and content
  • Seamless Integration: Works with OpenAI's function calling capability for reliable tool use
# !pip install -qU langchain langchain-openai langchain-tavily
import datetime

from langchain.agents import create_openai_tools_agent, AgentExecutor
from langchain.chat_models import init_chat_model
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_tavily import TavilySearch, TavilyCrawl
from langchain.schema import HumanMessage

# Initialize LLM
llm = init_chat_model(model="gpt-4.1", model_provider="openai", temperature=0)

# Initialize Tavily Search Tool
tavily_search_tool = TavilySearch(
    max_results=5,
    topic="general",
)

tavily_crawl_tool = TavilyCrawl()

# Set up Prompt with 'agent_scratchpad'
today = datetime.datetime.today().strftime("%D")
prompt = ChatPromptTemplate.from_messages([
    ("system", f"""You are a helpful research assistant. You will be given a query and will need to
    search and crawl the web for the most relevant information. The date today is {today}."""),
    MessagesPlaceholder(variable_name="messages"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),  # Required for tool calls
])

# Create an agent that can use tools
agent = create_openai_tools_agent(
    llm=llm,
    tools=[tavily_search_tool, tavily_crawl_tool],
    prompt=prompt
)

# Create an Agent Executor to handle tool execution
agent_executor = AgentExecutor(agent=agent, tools=[tavily_search_tool, tavily_crawl_tool], verbose=True)

user_input = "Find the base URL of Apple and then crawl the base URL to find all iPhone models"

# Construct input properly as a dictionary
response = agent_executor.invoke({"messages": [HumanMessage(content=user_input)]})

This example shows how to:

  1. Initialize both Tavily Search and Crawl tools
  2. Set up an agent with a custom prompt that includes the current date
  3. Create an agent executor that can use both tools
  4. Process a user query that requires both searching and crawling capabilities

The agent will first use the search tool to find Apple's base URL, then use the crawl tool to explore the website and find information about iPhone models.

Download files

Download the file for your platform.

Source Distribution

langchain_tavily-0.2.4.tar.gz (20.7 kB)

Uploaded Source

Built Distribution

langchain_tavily-0.2.4-py3-none-any.whl (24.5 kB)

Uploaded Python 3

File details

Details for the file langchain_tavily-0.2.4.tar.gz.

File metadata

  • Download URL: langchain_tavily-0.2.4.tar.gz
  • Size: 20.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.1.1 CPython/3.11.9 Darwin/24.5.0

File hashes

Hashes for langchain_tavily-0.2.4.tar.gz

  • SHA256: 68281a47e2e45e857a0d8087478f2638ea7e7c3a8a61a00cc0a2e3b7541240ea
  • MD5: d69d56321671fed58cc22ff263705f84
  • BLAKE2b-256: 15dffaf9b205c93a048b96d013a88623bf31f974cf8d2785fc877d239861f1e8


File details

Details for the file langchain_tavily-0.2.4-py3-none-any.whl.

File metadata

  • Download URL: langchain_tavily-0.2.4-py3-none-any.whl
  • Size: 24.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.1.1 CPython/3.11.9 Darwin/24.5.0

File hashes

Hashes for langchain_tavily-0.2.4-py3-none-any.whl

  • SHA256: 36a80f2dd331cd68e26f4f6e6c3602e2615df4da21da05d787b904f7a564d487
  • MD5: 8ed7aedcbf4fe22a5053419ad1b9fc63
  • BLAKE2b-256: 61109c7fc924a4a099840d03cf7b58da337f9ca54949bb0cfd26fe5320959c6c

