
Python client library for the NeMo LLM API


nemollm

Introduction

NeMo LLM Service offers state-of-the-art LLMs that were pre-trained on internet-scale text corpora.
With the NeMo LLM Service API, users can invoke the service from within their application code.
These models can be flexibly adapted to solve almost any language-processing task for your use cases, and you can try them out quickly via an API that integrates easily into your applications.
Further, the NeMo LLM Service also offers customization capabilities, where the models can be effectively adapted to new tasks using your own uploaded data.

Feature summary

  • Text Completion. Using one of the available pre-trained models, the LLM service responds to an input prompt by generating an extension of the provided input text, that is, a completion. This technique can be used to solve many NLP tasks with zero-/few-shot learning techniques.
  • Model Customization. Fine-tune an existing model on your own data in the form of prompt+completion pairs. This enhances the model’s ability to adapt to your use cases by ingesting hundreds to thousands of domain-specific examples.
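
Customization data is uploaded as a JSON Lines file of prompt+completion pairs. A minimal sketch of producing such a file, assuming each record uses `prompt` and `completion` keys (the key names and contents here are illustrative assumptions; confirm the expected schema in the NeMo LLM Service documentation):

```python
import json

# Hypothetical training records: the "prompt"/"completion" key names are an
# assumption for illustration, not a confirmed schema.
records = [
    {"prompt": "Translate to French: Hello", "completion": "Bonjour"},
    {"prompt": "Translate to French: Goodbye", "completion": "Au revoir"},
]

# JSON Lines: one standalone JSON object per line.
with open("train.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")

with open("train.jsonl") as f:
    print(len(f.readlines()))  # 2
```

The resulting file is what you would pass to the upload step shown below.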

Requirements

Python >=3.6

Installation & Usage

pip install

pip install nemollm

Or if you have this folder locally and wish to develop with it,

pip install -e .

Authenticate with NGC API KEY

export NGC_API_KEY=<your_ngc_api_key>
# optional - to specify an org id that is not your default org id
export NGC_ORG_ID=<your_ngc_org_id>

Usage with Python

from nemollm.api import NemoLLM
import os

# Basic instantiation
conn = NemoLLM()

# Advanced instantiation, when defaults need to be modified
conn = NemoLLM(api_key=os.getenv("NGC_API_KEY"), org_id=os.getenv("NGC_ORG_ID"), api_host="<api_host>/<base_url>")

# Text completion
response = conn.generate(model="gpt5b", prompt="Winnie the Pooh")
print(response)

# Multiple async text completion 
responses = conn.generate_multiple(model="gpt5b", prompts=["Winnie the Pooh", "Scooby Doo", "Spongebob Squarepants"])
print(responses)

# Single async text completion
future = conn.generate(model="gpt5b", prompt="Winnie the Pooh", return_type="async")
response = future.result()
response = NemoLLM.post_process_generate_response(response)
print(response)

# File upload 
response = conn.upload("path/to/local/jsonl/file")
print(response)

# Create customization
response = conn.create_customization(
  model="gpt5b",
  name="training job name",
  training_dataset_file_id="training_dataset_file_id", 
  validation_dataset_file_id="validation_dataset_file_id"
)
print(response)
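
The single-async call above returns a future that is resolved later with .result(). When issuing many requests, the same pattern fans out naturally: submit everything first, then block on each future. A minimal sketch of that pattern, using a stub function in place of conn.generate so it runs offline (the stub and its return shape are assumptions, not the real API response):

```python
from concurrent.futures import ThreadPoolExecutor

# Stub standing in for conn.generate(..., return_type="async"); the real call
# would return a future whose .result() holds the raw service response.
def fake_generate(prompt):
    return {"prompt": prompt, "text": prompt.upper()}  # placeholder "completion"

executor = ThreadPoolExecutor(max_workers=4)
prompts = ["Winnie the Pooh", "Scooby Doo", "Spongebob Squarepants"]

# Submit all requests before blocking on any result, so they run concurrently.
futures = [executor.submit(fake_generate, p) for p in prompts]
results = [f.result() for f in futures]
print([r["text"] for r in results])
```

With the real client, conn.generate_multiple already encapsulates this fan-out for you; the sketch only illustrates what the return_type="async" futures enable.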

Each function above has additional params that can be customized. Please check nemollm/api_calls.py for further details.

Other functions are also available, including {download, delete, list, get_info} for customizations and {get_info, delete} for files. Please check nemollm/api_calls.py for usage.

Usage with CLI

Every function (except generate_multiple) is also supported via the CLI.

# Text generation
nemollm generate -p "Winnie the Pooh" -m gpt5b

# File upload
nemollm upload -f "<path/to/local/jsonl/file>"

# Create customization
nemollm create_customization -m gpt5b -t "<training/jsonl/file/id>" -v "<validation/jsonl/file/id>" --name "training job name" --epochs 1

Each function above has additional params that can be customized. Please check nemollm/cli.py for further details.

Other functions are also available, including {download, delete, list, get_info} for customizations and {download, delete} for files. Please check nemollm/cli.py for usage.

Author

nvidia-nemollm@nvidia.com
