Project description
🌸 Nagato
Nagato is a framework that lets any developer streamline the creation of fine-tuned embedding and language models tailored to a given corpus of data.
Quick Start Guide • Features • Key benefits • How it works
Features
- Data ingestion from various formats such as JSON, CSV, TXT, PDF, etc.
- Data embedding using pre-trained or fine-tuned models.
- Storage of embedded vectors.
- Automatic generation of question/answer pairs for model fine-tuning.
- Built-in code interpreter.
- API concurrency for scalability and performance.
- Workflow management for ingestion pipelines.
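To make the ingestion step concrete, here is a minimal sketch of what format-aware ingestion can look like using only the standard library. The `ingest` helper is hypothetical and not part of Nagato's public API; it simply illustrates normalizing JSON, CSV, and plain text into text chunks ready for embedding.

```python
import csv
import io
import json


def ingest(payload: str, fmt: str) -> list[str]:
    """Normalize raw content from different formats into plain-text chunks."""
    if fmt == "json":
        # One chunk per top-level record.
        return [json.dumps(item) for item in json.loads(payload)]
    if fmt == "csv":
        rows = list(csv.reader(io.StringIO(payload)))
        header, body = rows[0], rows[1:]
        # Render each row as "column=value" pairs so the text carries its schema.
        return [", ".join(f"{h}={v}" for h, v in zip(header, row)) for row in body]
    # TXT and other plain formats: split on blank lines.
    return [chunk.strip() for chunk in payload.split("\n\n") if chunk.strip()]
```

A PDF branch would need a third-party parser, which is why it is omitted from this stdlib-only sketch.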
Key benefits
- Faster inference: generic models carry computational overhead from their broad-based training. Fine-tuned models are optimized for a specific domain, enabling faster inference and more timely results.
- Lower costs: a model fine-tuned on a specific corpus needs fewer tokens for accurate understanding and response generation, which reduces computational cost and thus operational expense.
- Better results: fine-tuned models outperform generic, all-purpose models on specialized tasks. Whether you're generating embeddings or answering complex queries, you can expect more accurate and contextually relevant outcomes.
How it works
Nagato utilizes distinct strategies to process structured and unstructured data, aiming to produce fine-tuned models for both types. Below is a breakdown of how this is accomplished:
Unstructured data:
1. Selection of the embedding model: Nagato first analyzes the textual content of the corpus. Based on characteristics such as vocabulary, context, and domain-specific jargon, it picks the most suitable pre-trained text-based embedding model.
2. Fine-tuning the embedding model: the selected model is then fine-tuned to align more closely with the domain or subject matter of the corpus, so that the embeddings it generates are as accurate and relevant as possible.
3. Fine-tuning the language model: after generating and storing embeddings, Nagato creates question/answer pairs and uses them to fine-tune a GPT-based language model, yielding a model specialized in understanding and generating text within the domain of the corpus.
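The question/answer pairs from step 3 ultimately have to be serialized into a fine-tuning dataset. A common target is the chat-format JSONL accepted by OpenAI-style fine-tuning endpoints; the `to_finetune_records` helper below is an illustrative sketch of that serialization, not Nagato's actual implementation.

```python
import json


def to_finetune_records(qa_pairs: list[tuple[str, str]]) -> str:
    """Serialize question/answer pairs into chat-format JSONL,
    one training record per line."""
    lines = []
    for question, answer in qa_pairs:
        record = {
            "messages": [
                {"role": "user", "content": question},
                {"role": "assistant", "content": answer},
            ]
        }
        lines.append(json.dumps(record))
    return "\n".join(lines)
```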
Structured data:
- Sandboxed REPL: Nagato provides a secure, sandboxed read-eval-print loop (REPL) for executing code snippets against structured data, enabling flexible, dynamic processing of formats such as JSON, CSV, or XML.
- Evaluation/prediction using a code interpreter: after initial processing, a code interpreter evaluates code snippets within the sandboxed environment to produce predictions or analyses from the structured data, allowing the extraction of highly specialized, domain-tailored insights.
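The core idea of running snippets against structured data can be sketched as follows. Note the heavy caveat: restricting the `exec` namespace as shown is only an illustration of the concept; a production sandbox like Nagato's needs real isolation (separate processes, resource limits, import restrictions), and `run_snippet` here is a hypothetical helper.

```python
def run_snippet(code: str, data):
    """Execute a snippet against structured data in a minimal namespace.

    The snippet is expected to store its output in a variable named
    `result`. This is NOT a real sandbox: a trimmed-down builtins dict
    illustrates namespace restriction but does not stop a determined
    attacker.
    """
    namespace = {
        "__builtins__": {"len": len, "sum": sum, "min": min, "max": max},
        "data": data,
    }
    exec(code, namespace)
    return namespace.get("result")
```

For example, a snippet like `result = sum(row['x'] for row in data)` can aggregate a column across a list of JSON records.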
Project details
Release history
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distribution
File details
Details for the file `nagato_ai-0.0.13.tar.gz`.
File metadata
- Download URL: nagato_ai-0.0.13.tar.gz
- Upload date:
- Size: 10.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.5.1 CPython/3.11.4 Darwin/22.4.0
File hashes
Algorithm | Hash digest
---|---
SHA256 | `5e8627276ed21cfac21dbb39ac256add51f9db1338fe2b37ad1eb380ed9a071e`
MD5 | `1e81105a277c7e4d7955efc48004b5ea`
BLAKE2b-256 | `2939b10b3162abcddfd21120aaa390980c9e383732d76f3efe7718a502c13cf1`
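A downloaded archive can be checked against the published digests above. A minimal sketch using the standard library's `hashlib`, reading the file in blocks so large archives don't need to fit in memory:

```python
import hashlib


def sha256_of(path: str) -> str:
    """Compute the SHA-256 hex digest of a file, reading in 64 KiB blocks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for block in iter(lambda: fh.read(65536), b""):
            digest.update(block)
    return digest.hexdigest()
```

Compare the returned hex string with the SHA256 value in the table; a mismatch means the download is corrupted or tampered with.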
File details
Details for the file `nagato_ai-0.0.13-py3-none-any.whl`.
File metadata
- Download URL: nagato_ai-0.0.13-py3-none-any.whl
- Upload date:
- Size: 10.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.5.1 CPython/3.11.4 Darwin/22.4.0
File hashes
Algorithm | Hash digest
---|---
SHA256 | `0ef1b4ef307860d262881e033e70e59be1453ca014c92538d90ca7208730773a`
MD5 | `bb54456c9f96b7272794d9f6bdf74b9a`
BLAKE2b-256 | `21af50013e1ceac74fc02bde72888285910ca7a0f58fb02f5771fadec7c1f5de`