
langchain-maritaca


🇧🇷 Leia em Português

An integration package connecting Maritaca AI and LangChain for Brazilian Portuguese language models.

  • Author: Anderson Henrique da Silva
  • Location: Minas Gerais, Brazil
  • GitHub: anderson-ufrj

Overview

Maritaca AI provides state-of-the-art Brazilian Portuguese language models, including the Sabiá family of models. This integration allows you to use Maritaca's models seamlessly within the LangChain ecosystem.

Available Models

| Model | Context | Input (R$/1M tokens) | Output (R$/1M tokens) | Vision |
|---|---|---|---|---|
| sabia-3.1 | 128k | R$5.00 | R$10.00 | Yes |
| sabiazinho-4 | 128k | R$1.00 | R$4.00 | Yes |
| sabiazinho-3.1 | 32k | R$1.00 | R$3.00 | Yes |

Note: All models support vision/multimodal inputs (images).
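For a rough sense of what a request costs, the per-token arithmetic follows directly from the table above (prices are per million tokens). A minimal sketch; the token counts below are made-up example values, not from a real request:

```python
# Example cost estimate for sabia-3.1, using the prices from the table above.
# Token counts are hypothetical example values.
INPUT_PRICE_PER_1M = 5.00    # R$ per 1M input tokens (sabia-3.1)
OUTPUT_PRICE_PER_1M = 10.00  # R$ per 1M output tokens (sabia-3.1)

input_tokens = 1_200
output_tokens = 800

cost = (input_tokens * INPUT_PRICE_PER_1M
        + output_tokens * OUTPUT_PRICE_PER_1M) / 1_000_000
print(f"Estimated cost: R${cost:.4f}")  # Estimated cost: R$0.0140
```

See the Token Counting & Cost Estimation section below for the package's built-in helpers.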

Installation

pip install langchain-maritaca

Setup

Set your Maritaca API key as an environment variable:

export MARITACA_API_KEY="your-api-key"

Or pass it directly to the model:

from langchain_maritaca import ChatMaritaca

model = ChatMaritaca(api_key="your-api-key")

Usage

Basic Usage

from langchain_maritaca import ChatMaritaca

model = ChatMaritaca(
    model="sabia-3.1",
    temperature=0.7,
)

messages = [
    ("system", "Você é um assistente prestativo especializado em cultura brasileira."),
    ("human", "Quais são as principais festas populares do Brasil?"),
]

response = model.invoke(messages)
print(response.content)

Streaming

from langchain_maritaca import ChatMaritaca

model = ChatMaritaca(model="sabia-3.1", streaming=True)

for chunk in model.stream("Conte uma história sobre o folclore brasileiro"):
    print(chunk.content, end="", flush=True)

Async Usage

import asyncio
from langchain_maritaca import ChatMaritaca

async def main():
    model = ChatMaritaca(model="sabia-3.1")
    response = await model.ainvoke("Qual é a receita de pão de queijo?")
    print(response.content)

asyncio.run(main())

With LangChain Expression Language (LCEL)

from langchain_maritaca import ChatMaritaca
from langchain_core.prompts import ChatPromptTemplate

model = ChatMaritaca(model="sabia-3.1")

prompt = ChatPromptTemplate.from_messages([
    ("system", "Você é um especialista em {topic}."),
    ("human", "{question}"),
])

chain = prompt | model

response = chain.invoke({
    "topic": "história do Brasil",
    "question": "Quem foi Tiradentes?"
})
print(response.content)

With Tool Calling (Function Calling)

from langchain_maritaca import ChatMaritaca
from langchain_core.tools import tool

@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"O clima em {city} está ensolarado, 25°C"

model = ChatMaritaca(model="sabia-3.1")
model_with_tools = model.bind_tools([get_weather])

response = model_with_tools.invoke("Como está o tempo em São Paulo?")
print(response)
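When the model decides to call a tool, the response carries a tool_calls list in LangChain's standard name/args/id shape. A minimal dispatch sketch, using a hypothetical payload in place of a live API response:

```python
# Hypothetical tool_calls payload, shaped like LangChain's AIMessage.tool_calls.
tool_calls = [{"name": "get_weather", "args": {"city": "São Paulo"}, "id": "call_1"}]

# Mirrors the get_weather tool defined above.
def get_weather(city: str) -> str:
    return f"O clima em {city} está ensolarado, 25°C"

# Map tool names to callables and execute each requested call. In a real
# chain, these results would be sent back to the model as ToolMessage
# objects keyed by the call id.
registry = {"get_weather": get_weather}
results = {tc["id"]: registry[tc["name"]](**tc["args"]) for tc in tool_calls}
print(results["call_1"])
```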

Vision / Multimodal (Images)

All Maritaca models support image inputs. You can send images via URL or base64:

from langchain_maritaca import ChatMaritaca
from langchain_core.messages import HumanMessage

model = ChatMaritaca(model="sabiazinho-4")

# With image URL
response = model.invoke([
    HumanMessage(content=[
        {"type": "text", "text": "O que você vê nesta imagem?"},
        {"type": "image", "url": "https://example.com/image.jpg"}
    ])
])
print(response.content)

# With base64-encoded image
response = model.invoke([
    HumanMessage(content=[
        {"type": "text", "text": "Descreva esta imagem em detalhes"},
        {"type": "image", "base64": "iVBORw0KGgo...", "mime_type": "image/png"}
    ])
])

Also compatible with OpenAI's image_url format:

response = model.invoke([
    HumanMessage(content=[
        {"type": "text", "text": "What's in this image?"},
        {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}}
    ])
])

With Caching

from langchain_core.caches import InMemoryCache
from langchain_core.globals import set_llm_cache
from langchain_maritaca import ChatMaritaca

# Enable caching globally
set_llm_cache(InMemoryCache())

model = ChatMaritaca(model="sabia-3.1")

# First call - hits the API
response1 = model.invoke("Qual é a capital do Brasil?")

# Second call - uses cache (instant, no API cost!)
response2 = model.invoke("Qual é a capital do Brasil?")

With Callbacks for Observability

from langchain_maritaca import ChatMaritaca, CostTrackingCallback, LatencyTrackingCallback

# Create callbacks for monitoring
cost_cb = CostTrackingCallback()
latency_cb = LatencyTrackingCallback()

model = ChatMaritaca(callbacks=[cost_cb, latency_cb])

# Make some calls
model.invoke("Hello!")
model.invoke("How are you?")

# Check metrics
print(f"Total cost: ${cost_cb.total_cost:.6f}")
print(f"Total tokens: {cost_cb.total_tokens}")
print(f"Average latency: {latency_cb.average_latency:.2f}s")
print(f"P95 latency: {latency_cb.p95_latency:.2f}s")

Token Counting & Cost Estimation

from langchain_maritaca import ChatMaritaca
from langchain_core.messages import HumanMessage

model = ChatMaritaca(model="sabia-3.1")

# Count tokens in text
tokens = model.get_num_tokens("Olá, como você está?")
print(f"Tokens: {tokens}")

# Estimate cost before making a request
messages = [HumanMessage(content="Tell me about Brazil")]
estimate = model.estimate_cost(messages, max_output_tokens=1000)
print(f"Estimated cost: ${estimate['total_cost']:.6f}")

Tip: Install with pip install langchain-maritaca[tokenizer] for accurate token counting using tiktoken.

Why Maritaca AI?

Maritaca AI models are specifically trained for Brazilian Portuguese, offering:

  • Native Portuguese Understanding: Better comprehension of Brazilian idioms, expressions, and cultural context
  • Local Data Training: Trained on diverse Brazilian Portuguese data sources
  • Cost-Effective: Competitive pricing for Portuguese language tasks
  • Low Latency: Servers located in Brazil for faster response times

Used in Production

Cidadão.AI - Brazilian government transparency platform powered by AI agents, handling 331K+ requests/month.

Using this package in production? Open an issue to get featured!

API Reference

ChatMaritaca

Main class for interacting with Maritaca AI models.

Parameters:

| Parameter | Type | Default | Description |
|---|---|---|---|
| model | str | "sabia-3.1" | Model name to use |
| temperature | float | 0.7 | Sampling temperature (0.0-2.0) |
| max_tokens | int | None | Maximum tokens to generate |
| top_p | float | 0.9 | Top-p sampling parameter |
| api_key | str | None | Maritaca API key (falls back to the MARITACA_API_KEY env var) |
| base_url | str | "https://chat.maritaca.ai/api" | API base URL |
| timeout | float | 60.0 | Request timeout in seconds |
| max_retries | int | 2 | Maximum retry attempts |
| retry_if_rate_limited | bool | True | Auto-retry on rate limit (HTTP 429) |
| retry_delay | float | 1.0 | Initial delay between retries (seconds) |
| retry_max_delay | float | 60.0 | Maximum delay between retries (seconds) |
| retry_multiplier | float | 2.0 | Multiplier for exponential backoff |
| streaming | bool | False | Enable streaming responses |
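With the retry parameters above, waits between attempts grow exponentially and are capped. A pure-Python sketch of the implied schedule, assuming delays combine as retry_delay × retry_multiplier^attempt capped at retry_max_delay (check the package source for the exact behavior):

```python
# Backoff schedule implied by the retry parameters above (an assumption
# about how they combine, for illustration only).
retry_delay = 1.0       # initial delay (seconds)
retry_multiplier = 2.0  # exponential factor
retry_max_delay = 60.0  # cap (seconds)

delays = [min(retry_delay * retry_multiplier ** attempt, retry_max_delay)
          for attempt in range(8)]
print(delays)  # [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 60.0, 60.0]
```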

Development

Setup

# Clone the repository
git clone https://github.com/anderson-ufrj/langchain-maritaca.git
cd langchain-maritaca

# Install dependencies
pip install -e ".[dev]"

# Run tests
pytest

# Run linting
ruff check .
ruff format .

# Run type checking
mypy langchain_maritaca

Running Tests

# Unit tests only
pytest tests/unit_tests/

# Integration tests (requires MARITACA_API_KEY)
pytest tests/integration_tests/

# With coverage
pytest --cov=langchain_maritaca --cov-report=html

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'feat: add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

Changelog

See CHANGELOG.md for a list of changes.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Related Projects

  • LangChain - Building applications with LLMs through composability
  • Maritaca AI - Brazilian Portuguese language models
