
Download LLMs to use locally.

Project description

Unsplash / Sean Sinclair

getllms - 1.2

The LLMs index. Uses the LMStudio Catalog.

$ pip install getllms


In This Version

What's New

  • 0.4: Minor fixes

  • 0.5: Support for Notebooks

  • 1.1: Download updates

  • 1.2: Added CLI

List All LLMs.

List all LLMs available for use. Selected for you.

import getllms

models = getllms.list_models()
Output

[
  Model(
    name="Google's Gemma 2B Instruct", 
    description='** Requires LM Studio 0.2.15 or new…', 
    files=[ (1) ]
  ), 
  Model(
    name='Mistral 7B Instruct v0.2', 
    description='The Mistral-7B-Instruct-v0.2 Large …', 
    files=[ (2) ]
  ),
  ...
]
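
For example, you can pick a specific model out of the returned list by name. A minimal sketch, assuming each Model exposes the name and description attributes shown in the repr above:

import getllms

# Fetch the catalog and look up one model by its display name.
models = getllms.list_models()
gemma = next(m for m in models if m.name == "Google's Gemma 2B Instruct")

print(gemma.description)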


See Trained LLMs.

Get the trained model files for a specific model. Select the one that meets your system requirements.

# select Google's Gemma 2B Instruct
model = models[0]
model.files
Output & More

Output

FileCollection(
  best=ModelFile(
    name='gemma-2b-it-q8_0.gguf', 
    size=2669351840, 
    url='https://huggingface.co/lmstudi…'
  ),
  +0
)

More

Additionally, you can see all the available model files:

model.files.all # [ ModelFile(...), ... ]
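
For instance, you can compare the files by size before picking one. This is a sketch that assumes ModelFile exposes the name and size attributes shown in the output above:

# Print every available file with its size, then keep the smallest one.
for f in model.files.all:
    print(f.name, f"{f.size / 1e9:.2f} GB")

smallest = min(model.files.all, key=lambda f: f.size)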


Download LLMs.

Download the LLM that's right for you.

model.download("model.bin")
Output

  0.0% ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 900.0KB / 2.5GB (8.7MB/s)
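
If you want to confirm what you are about to fetch, you can read the recommended file's metadata before calling download. A minimal sketch, reusing the files.best, name, and size attributes shown earlier:

# Report the recommended file, then download it to a local path.
best = model.files.best
print(f"Downloading {best.name} ({best.size / 1e9:.2f} GB)")

model.download("model.bin")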


Unsplash / Milad Fakurian

CLI

Learn how to use the CLI from the help command or refer to this page.

$ getllms

List Models.

To list all models, use the getllms list command. Note that the output is truncated to the first 5 models; if you wish to list all of them, use getllms list all instead.

$ getllms list
Output

Google's Gemma 2B Instruct

** Requires LM Studio 0.2.15 or newer ** Gemma is a family of lightweight LLMs built from the same research and technology Google used to create the Gemini models. Gemma models are available in two sizes, 2 billion and 7 billion parameters. These models are trained on up to 6T tokens of primarily English web documents, mathematics, and code, using a transformer architecture with enhancements like Multi-Query Attention, RoPE Embeddings, GeGLU Activations, and advanced normalization techniques.


Mistral 7B Instruct v0.2

The Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an improved instruct fine-tuned version of Mistral-7B-Instruct-v0.1. For full details of this model read MistralAI's blog post and paper.

(...)


Download Models.

To download a model, use getllms <model name>:

$ getllms "Google's Gemma 2B Instruct"
Specify model size

If you wish to specify the model's size (economical/best), add the desired size in square brackets after the model name.

$ getllms "Google's Gemma 2B Instruct [economical]"



© 2023 AWeirdDev. Catalog by LMStudio



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

getllms-1.2.tar.gz (28.8 kB)

Uploaded Source

File details

Details for the file getllms-1.2.tar.gz.

File metadata

  • Download URL: getllms-1.2.tar.gz
  • Upload date:
  • Size: 28.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.11.8

File hashes

Hashes for getllms-1.2.tar.gz
Algorithm Hash digest
SHA256 c4a2d832ed771c4a11f88c4a9d3f3ef771b198903bc6ee9bfd8cf4116b8c8eaf
MD5 c36f180159d8699d3371584886025277
BLAKE2b-256 fcaaf50c5113563884c91abba1b3a3f533e42652a5129d2e3ed354c3c57945f8

