langdash

A simple library for interfacing with language models.

Currently in beta!

Features:

  • Support for guided text generation, text classification (through prompting), and vector-based document searching (see the sketch after this list).
  • Lightweight, build-it-yourself-style prompt wrappers in pure Python, with no domain-specific language involved.
  • Token healing and transformers/RNN state reuse for fast inference, like in Microsoft's guidance.
  • First-class support for ggml backends.
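
To give a flavor of the document-searching feature, here is a minimal conceptual sketch of embedding-based search, using the sentence_transformers embedding backend directly rather than langdash's own wrappers. The model name is an arbitrary example; see the documentation and the examples folder for the actual langdash API.

from sentence_transformers import SentenceTransformer, util

# Embed a handful of documents and a query, then rank documents by cosine similarity.
model = SentenceTransformer("all-MiniLM-L6-v2")  # arbitrary example model
documents = [
    "langdash is a simple library for interfacing with language models.",
    "Tokyo is the capital of Japan.",
    "The mitochondria is the powerhouse of the cell.",
]
doc_embeddings = model.encode(documents, convert_to_tensor=True)
query_embedding = model.encode("What does langdash do?", convert_to_tensor=True)

scores = util.cos_sim(query_embedding, doc_embeddings)[0]
best = int(scores.argmax())
print(documents[best], float(scores[best]))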

Documentation: Read on readthedocs.io

Repository: main / Gitlab mirror

Installation

Install with pip. By default, langdash does not include any optional modules; you will have to specify the extras you need, as in the following command:

pip install --user langdash[embeddings,sentence_transformers]

List of modules:

  • core:
    • embeddings: required for searching documents through embeddings
  • backends:
    • Generation backends: rwkv_cpp, llama_cpp, ctransformers, transformers
    • Embedding backends: sentence_transformers
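
For example, to install langdash together with the llama_cpp generation backend (assuming each backend module listed above is exposed as a pip extra of the same name):

pip install --user langdash[llama_cpp]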

Note: If running from source, initialize the git submodules in the langdash/extern folder to compile foreign backends.
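
A typical from-source setup might look like the following sketch (the repository URL is a placeholder, and the editable install is only a suggested workflow, not a documented requirement):

git clone <repository-url> langdash
cd langdash
git submodule update --init --recursive   # initializes submodules, including those in langdash/extern
pip install --user -e .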

Usage

Examples:

See the examples folder for full examples.

Running the Examples

All examples can be run with the following command:

python examples/instruct.py [model type] [model name or path]

You can specify additional model parameters with the -ae CLI argument, which takes a parameter name followed by a valid Python literal. For example, to run the chat example with the WizardLM model and a context length of 4096, do:

python examples/chat.py llama_cpp /path/to/ggml-wizardlm.bin -ae n_ctx 4096

Some examples require you to specify a prompt format. Available formats include wizardlm (a shortened Alpaca format that omits the first prompt line and the # Instruction: header) and alpaca (the full format). You will need to specify one for most of the examples:

python examples/instruct.py llama_cpp /path/to/ggml-wizardlm.bin -ae n_ctx 4096 --prompt-format wizardlm

For a full list, see the examples/_instruct_format.py file.

License

Apache 2.0


