LangChainKit makes it easier to work with Qwen3 models via vLLM, and simplifies the process of prompting LLMs to return structured outputs using LangChain and Langfuse.

LangChainKit

LangChainKit simplifies the process of prompting LLMs to return structured outputs using LangChain and Langfuse.


🚀 Features

  • 🔧 Simplified Qwen3 + vLLM integration
    Automatically configure enable_thinking and other complex settings for Qwen3 models when using vLLM.

  • 🧠 Structured Output via LangChain
    Easily prompt the LLM to generate structured outputs, including batch prompting support, with minimal setup.

  • 📊 Langfuse Integration
    Track and evaluate LLM performance using Langfuse, without writing boilerplate code.
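To make the first feature concrete: toggling Qwen3's thinking mode on a vLLM server normally means building the request body yourself. The sketch below shows a hypothetical raw payload of that kind (the `chat_template_kwargs` field is how vLLM's OpenAI-compatible server forwards the `enable_thinking` switch to the chat template; the model name and exact shape here are illustrative assumptions, not LangChainKit's API).

```python
import json

# Illustrative raw request body for vLLM's OpenAI-compatible endpoint.
# LangChainKit's point is that you should not have to write this by hand.
payload = {
    "model": "Qwen/Qwen3-8B",  # assumed model name, for illustration only
    "messages": [
        {"role": "user", "content": "What is the capital of France?"},
    ],
    # vLLM passes chat_template_kwargs into the model's chat template;
    # for Qwen3 this is where enable_thinking is toggled.
    "chat_template_kwargs": {"enable_thinking": False},
}
print(json.dumps(payload, indent=2))
```

LangChainKit wraps this configuration so you work with a plain LangChain chat model instead of raw request bodies.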


Installation

pip install langchainkit

Quick Start

Configuration

Set your environment variables in a .env file:

DEEPSEEK_API_KEY=...
MOONSHOT_API_KEY=...
OPENROUTER_API_KEY=...
ARK_API_KEY=...
DASHSCOPE_API_KEY=...
LOCAL_VLLM_BASE_URL=http://172.20.14.28:8000/v1
LOCAL_VLLM_API_KEY=...

LANGFUSE_SECRET_KEY=...
LANGFUSE_PUBLIC_KEY=...
LANGFUSE_HOST=...
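
If you are curious what `load_dotenv()` does with a file like the one above, here is a stdlib-only sketch of the parsing step, using made-up sample values (the real python-dotenv library also handles quoting, comments, and exports, and writes into `os.environ`):

```python
# Minimal sketch of .env parsing: KEY=VALUE lines into a dict.
sample = "LANGFUSE_HOST=https://cloud.langfuse.com\nLANGFUSE_PUBLIC_KEY=pk-123\n"

env = {}
for line in sample.splitlines():
    line = line.strip()
    if not line or line.startswith("#") or "=" not in line:
        continue  # skip blanks, comments, and malformed lines
    key, _, value = line.partition("=")  # split on the first '=' only
    env[key.strip()] = value.strip()

print(env["LANGFUSE_HOST"])  # https://cloud.langfuse.com
```

In practice, just call `load_dotenv()` as shown below and read the values from the environment.
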

Usage

from langchainkit import GeneralLLM, prompt_parsing
from pydantic import BaseModel
from dotenv import load_dotenv

load_dotenv() # load .env file

llm = GeneralLLM.deepseek_chat()

class Response(BaseModel):
    answer: str
    confidence: float

result = prompt_parsing(
    model=Response,
    failed_model=Response(answer="no_answer", confidence=0.0),
    query="What is the capital of France?",
    llm=llm,
    use_langfuse=False 
)
print(result.answer)  # "Paris"
print(result.confidence)  # 1.0
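The `failed_model` argument is the value returned when the LLM's output cannot be parsed into the target schema. A minimal sketch of that fallback contract, assuming Pydantic v2 (the `parse_or_fallback` helper below is illustrative, not LangChainKit's internal implementation):

```python
from pydantic import BaseModel, ValidationError

class Response(BaseModel):
    answer: str
    confidence: float

failed = Response(answer="no_answer", confidence=0.0)

def parse_or_fallback(raw: str) -> Response:
    """Validate the model's raw JSON; return the failure sentinel on error."""
    try:
        return Response.model_validate_json(raw)
    except ValidationError:
        return failed

good = parse_or_fallback('{"answer": "Paris", "confidence": 1.0}')
bad = parse_or_fallback('{"answer": "Paris"}')  # missing required field
print(good.answer)  # Paris
print(bad.answer)   # no_answer
```

This is why the call above always returns a valid `Response` object, even when the model misbehaves.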

result = prompt_parsing(
    model=Response,
    failed_model=Response(answer="no_answer", confidence=0.0),
    query=["What is the capital of France?",
           "What is the capital of Germany?",
           "What is the capital of Italy?"],
    llm=llm,
    use_langfuse=False
)
for response in result:
    print(response.answer)
    print(response.confidence)
# Paris
# 0.95
# Berlin
# 0.95
# Rome
# 1.0
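The batch form returns one parsed result per query, in the same order as the input list. A toy sketch of that one-to-one, order-preserving contract with a stubbed LLM (both the stub and the `batch_answer` helper are illustrative, not LangChainKit's API):

```python
from typing import Callable

# Stub "LLM": maps each question to a canned answer.
CANNED = {
    "What is the capital of France?": "Paris",
    "What is the capital of Germany?": "Berlin",
}

def fake_llm(query: str) -> str:
    return CANNED.get(query, "no_answer")

def batch_answer(queries: list[str], llm: Callable[[str], str]) -> list[str]:
    # One output per input, order preserved -- the property the batch
    # form of prompt_parsing gives you for lists of queries.
    return [llm(q) for q in queries]

answers = batch_answer(list(CANNED), fake_llm)
print(answers)  # ['Paris', 'Berlin']
```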

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments

  • LangChain for the core framework
  • vLLM for high-throughput LLM inference
  • Langfuse for observability and monitoring

Download files

Download the file for your platform.

  • Source distribution: langchainkit-0.1.8.tar.gz (12.4 kB)
  • Built distribution: langchainkit-0.1.8-py3-none-any.whl (12.4 kB)

File details

langchainkit-0.1.8.tar.gz (source, 12.4 kB, uploaded via twine/6.1.0 on CPython/3.12.11, Trusted Publishing: no)

  • SHA256: 7f78be751728bd988d9574ec40d911f8369365e75c231c5fc9b6b6b91a41bd93
  • MD5: 12af0e7c753a79df46d236e134e5c1fc
  • BLAKE2b-256: 1748539af69c5844a6eaefe4f3ebfbb7244534123c9bccd9f27e130b85afb6ec

langchainkit-0.1.8-py3-none-any.whl (Python 3 wheel, 12.4 kB, uploaded via twine/6.1.0 on CPython/3.12.11, Trusted Publishing: no)

  • SHA256: bc543ac39669fa4396ec9baffb916f3afbbcf5f041e0c99b8ef1ff3788d17f0a
  • MD5: 28db8c296f867fc34f0bd8f89821ea02
  • BLAKE2b-256: 3260fdf2dff68e0d87f491caf7833c8b1e95babc4be0b5f6f7cbf74b519f3cd8
