
Dartmouth LangChain


LangChain components for Dartmouth-hosted models.

Getting started

  1. Install the package:

pip install langchain_dartmouth

  2. Obtain a Dartmouth API key from developer.dartmouth.edu
  3. Store the API key as an environment variable called DARTMOUTH_API_KEY:

export DARTMOUTH_API_KEY=<your_key_here>

  4. Obtain a Dartmouth Chat API key
  5. Store the API key as an environment variable called DARTMOUTH_CHAT_API_KEY:

export DARTMOUTH_CHAT_API_KEY=<your_key_here>

[!NOTE] You may want to make the environment variables permanent or use a .env file
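If you go the .env route, a minimal stdlib-only loader looks like the sketch below. (The python-dotenv package is the usual choice and handles quoting and edge cases more robustly; this is only an illustration of what such a loader does.)

```python
import os
from pathlib import Path

# Minimal .env loader (stdlib only); python-dotenv does this more robustly.
def load_env_file(path=".env"):
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        # Skip blank lines and comments; keep only KEY=VALUE pairs.
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            # Do not overwrite variables that are already set in the shell.
            os.environ.setdefault(key.strip(), value.strip())

if Path(".env").exists():
    load_env_file()  # afterwards, DARTMOUTH_API_KEY is in os.environ
```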

What is this?

This library provides an integration of Dartmouth-provided generative AI resources with the LangChain framework.

There are three main components currently implemented:

  • Large Language Models
  • Embedding models
  • Reranking models

All of these components subclass the corresponding LangChain base classes, so they can be used seamlessly wherever the equivalent LangChain objects are expected.

Using the library

Large Language Models

There are three kinds of Large Language Models (LLMs) provided by Dartmouth:

  • On-premises:
    • Base models without instruction tuning (require no special prompt format)
    • Instruction-tuned models (also known as Chat models) requiring specific prompt formats
  • Cloud:
    • Third-party, pay-as-you-go chat models (e.g., OpenAI's GPT 4.1, Google Gemini)

Using a Dartmouth-hosted base language model:

from langchain_dartmouth.llms import DartmouthLLM

llm = DartmouthLLM(model_name="codellama-13b-hf")

response = llm.invoke("Write a Python script to swap two variables.")
print(response)

Using a Dartmouth-hosted chat model:

from langchain_dartmouth.llms import ChatDartmouth


llm = ChatDartmouth(model_name="meta.llama-3-2-11b-vision-instruct")

response = llm.invoke("Hi there!")

print(response.content)

[!NOTE] The required prompt format is enforced automatically when you are using ChatDartmouth.
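To illustrate what "required prompt format" means: instruction-tuned models like Llama 3 expect each conversation turn to be wrapped in special tokens. The sketch below assembles a Llama-3-style prompt by hand (token names follow Meta's published template; ChatDartmouth's internals may differ, and you never need to do this yourself):

```python
# Illustration only: the kind of prompt formatting ChatDartmouth applies
# automatically for Llama-3-style instruct models.
def format_llama3_chat(messages):
    parts = ["<|begin_of_text|>"]
    for role, content in messages:
        # Each turn is wrapped in header and end-of-turn tokens.
        parts.append(
            f"<|start_header_id|>{role}<|end_header_id|>\n\n{content}<|eot_id|>"
        )
    # Leave an open assistant header so the model generates the reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

print(format_llama3_chat([("user", "Hi there!")]))
```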

Using a Dartmouth-provided third-party chat model:

from langchain_dartmouth.llms import ChatDartmouth


llm = ChatDartmouth(model_name="openai.gpt-4.1-mini-2025-04-14")

response = llm.invoke("Hi there!")

print(response.content)

Embeddings model

Using a Dartmouth-hosted embeddings model:

from langchain_dartmouth.embeddings import DartmouthEmbeddings


embeddings = DartmouthEmbeddings()

response = embeddings.embed_query("Hello? Is there anybody in there?")

print(response)
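The returned embedding is a plain list of floats, so downstream math is ordinary Python. For example, a cosine similarity between two embedding vectors might look like this (toy three-dimensional vectors shown; real embeddings have hundreds of dimensions):

```python
import math

# Cosine similarity between two embedding vectors, e.g. as returned by
# embed_query. The vectors below are made-up toy examples.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

v1 = [0.1, 0.3, 0.5]
v2 = [0.2, 0.1, 0.4]
print(round(cosine_similarity(v1, v2), 3))  # -> 0.922
```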

Reranking

Using a Dartmouth-hosted reranking model:

from langchain_dartmouth.retrievers.document_compressors import DartmouthReranker
from langchain.docstore.document import Document


docs = [
    Document(page_content="Deep Learning is not..."),
    Document(page_content="Deep learning is..."),
    ]

query = "What is Deep Learning?"
reranker = DartmouthReranker(model_name="bge-reranker-v2-m3")
ranked_docs = reranker.compress_documents(query=query, documents=docs)

print(ranked_docs)
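Conceptually, a reranker scores each document against the query and returns the documents sorted by that score. The sketch below shows the idea with a toy word-overlap score; DartmouthReranker uses a trained cross-encoder model instead, so this is purely illustrative:

```python
# Conceptual sketch of reranking (NOT the DartmouthReranker implementation):
# score each document against the query, then sort best-first.
def rerank(query, docs, score_fn):
    return sorted(docs, key=lambda d: score_fn(query, d), reverse=True)

# Toy scoring function: count words shared between query and document.
def overlap_score(query, doc):
    return len(set(query.lower().split()) & set(doc.lower().split()))

docs = [
    "deep learning is a subfield of machine learning",
    "bananas are yellow",
]
print(rerank("what is deep learning", docs, overlap_score)[0])
```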

Available models

For a list of available models, call the list() method of the respective class.

How to cite

If you are using langchain_dartmouth as part of a scientific publication, we would greatly appreciate a citation of the following paper:

@inproceedings{10.1145/3708035.3736076,
  author = {Stone, Simon and Crossett, Jonathan and Luker, Tivon and Leligdon, Lora and Cowen, William and Darabos, Christian},
  title = {Dartmouth Chat - Deploying an Open-Source LLM Stack at Scale},
  year = {2025},
  isbn = {9798400713989},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  doi = {10.1145/3708035.3736076},
  booktitle = {Practice and Experience in Advanced Research Computing 2025: The Power of Collaboration},
  articleno = {43},
  numpages = {5}
}

License

Created by Simon Stone for Dartmouth College under Creative Commons CC BY-NC 4.0 License.
For questions, comments, or improvements, email Research Computing.

Except where otherwise noted, the example programs are made available under the OSI-approved MIT license.
