
Project description

Dartmouth LangChain

LangChain components for Dartmouth-hosted models.

Getting started

  1. Install the package:
pip install dartmouth-langchain
  2. Obtain a Dartmouth API key from developer.dartmouth.edu
  3. Store the API key as an environment variable called DARTMOUTH_API_KEY (or set it from Python, as sketched below):
export DARTMOUTH_API_KEY=<your_key_here>
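
If you prefer to set the key from within Python (for example, in a notebook), a minimal sketch; it sets the same DARTMOUTH_API_KEY variable described above:

import os

# Set the key for the current process only if it was not already exported.
# Replace the placeholder with your actual key.
if "DARTMOUTH_API_KEY" not in os.environ:
    os.environ["DARTMOUTH_API_KEY"] = "<your_key_here>"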

What is this?

This library provides an integration of Dartmouth-hosted generative AI resources with the LangChain framework.

There are three main components currently implemented:

  • Large Language Models
  • Embedding models
  • Reranking models

All of these components are based on the corresponding LangChain base classes and can be used seamlessly wherever their LangChain counterparts are expected.
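
Because they follow the standard LangChain interfaces, the components can be dropped into LangChain Expression Language (LCEL) chains like any other model. A minimal sketch using ChatDartmouth (introduced below); it assumes a recent LangChain release that provides langchain_core:

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from dartmouth_langchain.llms import ChatDartmouth

# Build a simple prompt -> model -> parser chain; the Dartmouth chat model
# slots in exactly where any other LangChain chat model would.
prompt = ChatPromptTemplate.from_template("Explain {topic} in one sentence.")
llm = ChatDartmouth(model_name="llama-3-8b-instruct")
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"topic": "reranking"}))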

Using the library

Large Language Models

There are two kinds of Large Language Models (LLMs) hosted by Dartmouth:

  • Base models without instruction tuning (require no special prompt format)
  • Instruction-tuned models (also known as Chat models) requiring specific prompt formats

Using a Dartmouth-hosted base language model:

from dartmouth_langchain.llms import DartmouthLLM

llm = DartmouthLLM(model_name="codellama-13b-hf")

response = llm.invoke("Write a Python script to swap two variables.")
print(response)

Using a Dartmouth-hosted chat model:

from dartmouth_langchain.llms import ChatDartmouth


llm = ChatDartmouth(model_name="llama-3-8b-instruct")

response = llm.invoke("Hi there!")

print(response.content)

Note: The required prompt format is applied automatically when you use ChatDartmouth.
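
Because ChatDartmouth is a LangChain chat model, it also accepts structured message lists instead of a plain string; a sketch (the system/human split is purely illustrative):

from langchain_core.messages import HumanMessage, SystemMessage
from dartmouth_langchain.llms import ChatDartmouth

llm = ChatDartmouth(model_name="llama-3-8b-instruct")

# No manual prompt templating is needed; the model-specific chat format
# is applied for you.
response = llm.invoke(
    [
        SystemMessage(content="You are a concise assistant."),
        HumanMessage(content="What is LangChain?"),
    ]
)
print(response.content)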

Embeddings model

Using a Dartmouth-hosted embeddings model:

from dartmouth_langchain import DartmouthEmbeddingsModel


embeddings = DartmouthEmbeddingsModel()

embeddings.embed_query("Hello? Is there anybody in there?")
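
Since DartmouthEmbeddingsModel follows the LangChain Embeddings interface, batches of texts can be embedded in one call. A sketch continuing the snippet above (the example texts are arbitrary):

# Embed several documents at once; each result is a list of floats.
texts = [
    "Dartmouth College is located in Hanover, New Hampshire.",
    "LangChain composes language model components into chains.",
]
vectors = embeddings.embed_documents(texts)
print(len(vectors), len(vectors[0]))  # number of texts, embedding dimension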

Reranking

Using a Dartmouth-hosted reranking model:

from dartmouth_langchain.retrievers.document_compressors import DartmouthReranker
from langchain.docstore.document import Document


docs = [
    Document(page_content="Deep Learning is not..."),
    Document(page_content="Deep learning is..."),
]

query = "What is Deep Learning?"
reranker = DartmouthReranker(model_name="bge-reranker-v2-m3")
ranked_docs = reranker.compress_documents(query=query, documents=docs)

print(ranked_docs)
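
Because DartmouthReranker implements LangChain's document-compressor interface, it can also wrap an existing retriever so that retrieved documents are reranked before use. A sketch, where base_retriever stands in for any retriever you have already built (e.g. from a vector store):

from langchain.retrievers import ContextualCompressionRetriever

# Rerank whatever the base retriever returns before handing it to the rest
# of the chain. `base_retriever` is assumed to exist already.
compression_retriever = ContextualCompressionRetriever(
    base_compressor=DartmouthReranker(model_name="bge-reranker-v2-m3"),
    base_retriever=base_retriever,
)
reranked_docs = compression_retriever.invoke("What is Deep Learning?")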

Available models

For a list of available models, check the documentation of the RESTful Dartmouth AI API.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

dartmouth_langchain-0.2.0.tar.gz (8.5 kB)

Uploaded Source

Built Distribution

dartmouth_langchain-0.2.0-py3-none-any.whl (9.2 kB)

Uploaded Python 3

File details

Details for the file dartmouth_langchain-0.2.0.tar.gz.

File metadata

  • Download URL: dartmouth_langchain-0.2.0.tar.gz
  • Upload date:
  • Size: 8.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.12.3

File hashes

Hashes for dartmouth_langchain-0.2.0.tar.gz

  • SHA256: 8a5c15241026dd35f17bef5f967e270283103ca1b3b028cd8ac88110082a8611
  • MD5: 12b525511276de9dca747b7195f41458
  • BLAKE2b-256: dfba01475206d07593051dcba2cf45162a111750355b3fd861f4496382f8cdc1


File details

Details for the file dartmouth_langchain-0.2.0-py3-none-any.whl.

File metadata

File hashes

Hashes for dartmouth_langchain-0.2.0-py3-none-any.whl

  • SHA256: d70e74ab39deba1533c2b4807cae2483fc3689c71431ac2a2c6090af96d8256c
  • MD5: 59487e3bc64ba5448f0641314db19cf3
  • BLAKE2b-256: 0292393a60b2671180fa5bd19dd89edf2cdabc30c9232fa89bf6dcb59406fe15

