
Never train another ML model again

Project description

FlashLearn - Agents made simple


FlashLearn provides a simple interface and orchestration (up to 1000 calls/min) for incorporating Agent LLMs into your typical workflows. Conduct data transformations, classifications, summarizations, rewriting, and custom multi-step tasks—just like you’d do with any standard ML library, but harnessing the power of LLMs under the hood.

  • Examples: click
  • Toolkits for advanced, prebuilt transformations: click
  • Customization options: click

Install:

pip install flashlearn

Learning a New “Task” from Sample Data

Following the familiar fit/predict pattern, you can quickly “learn” a custom skill from minimal (or no!) data. Provide sample data and instructions, then immediately apply the skill to new inputs.

from openai import OpenAI

from flashlearn.skills.learn_skill import LearnSkill
from flashlearn.utils import imdb_reviews_50k

def main():
    # Instantiate your pipeline “estimator” or “transformer”
    learner = LearnSkill(model_name="gpt-4o-mini", client=OpenAI())
    data = imdb_reviews_50k(sample=100)

    # Provide instructions and sample data for the new skill
    skill = learner.learn_skill(
        data,
        task=(
            'Evaluate likelihood to buy my product and write the reason why (on key "reason") '
            'and return an int 1-100 on key "likely_to_Buy".'
        ),
    )

    # Construct tasks for parallel execution (akin to batch prediction)
    tasks = skill.create_tasks(data)
    results = skill.run_tasks_in_parallel(tasks)
    print(results)

Predefined Complex Pipelines in 3 Lines

Load prebuilt “skills” as if they were specialized transformers in a ML pipeline. Instantly apply them to your data:

from flashlearn.skills import GeneralSkill
from flashlearn.skills.toolkit import EmotionalToneDetection

# You can pass a client when loading your pipeline component
skill = GeneralSkill.load_skill(EmotionalToneDetection)
tasks = skill.create_tasks([{"text": "Your input text here..."}])
results = skill.run_tasks_in_parallel(tasks)

print(results)

Single-Step Classification Using Prebuilt Skills

Classic classification tasks are as straightforward as calling “fit_predict” on an ML estimator:

import os
from openai import OpenAI
from flashlearn.skills.classification import ClassificationSkill

os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"
data = [{"message": "Where is my refund?"}, {"message": "My product was damaged!"}]

skill = ClassificationSkill(
    model_name="gpt-4o-mini",
    client=OpenAI(),
    categories=["billing", "product issue"],
    system_prompt="Classify the request."
)

tasks = skill.create_tasks(data)
print(skill.run_tasks_in_parallel(tasks))

Supported LLM Providers

Anywhere you might rely on an ML pipeline component, you can swap in an LLM:

client = OpenAI()  # Equivalent to instantiating a pipeline component
deep_seek = OpenAI(api_key="YOUR_DEEPSEEK_API_KEY", base_url="https://api.deepseek.com")
lite_llm = FlashLiteLLMClient()  # LiteLLM integration; manages keys via environment variables, akin to a top-level pipeline manager

Target audience

  • Anyone needing LLM-based data transformations at scale
  • Data scientists tired of building specialized models with insufficient data

Support & Consulting


Key Features

  • 100% JSON Workflows: Every response is valid JSON—machine-friendly from the start.
  • Scales Easily: Concurrency and rate limiting for large-scale projects.
  • Zero Model Training: Leverage your LLM directly; define or reuse “skills.”
  • LearnSkill: Instant creation of classification/rewriting “skills” from sample data.
  • Batch or Real-Time: Process live data, or batch thousands of tasks.
  • Cost Estimation: Predict token expenditures before running large jobs.
  • Skill Library: 200+ built-in skill definitions (docs).
  • Multi-Modal: Handle text, images, and more in consistent JSON pipelines.

High-throughput

Process up to 999 tasks in 60 seconds on your local machine. For higher loads and enterprise needs, contact us about the Enterprise Edition (demo available).

Processing tasks in parallel: 100%|██████████| 999/999 [01:00<00:00, 16.38 it/s, In total: 368969, Out total: 17288]
INFO:ParallelProcessor:All tasks complete. 999 succeeded, 0 failed.

High-Level Concept Flow

  1. Convert data rows into JSON tasks.
  2. Apply a Skill (classification, rewriting, etc.).
  3. LLM outputs guaranteed JSON.
  4. (Optional) Store or chain the JSON results into your next step.
Dataset Rows → JSON Tasks → FlashLearn Skills + LLM → Guaranteed JSON → DB/Next Steps

With guaranteed JSON, there’s no need to parse unstructured text. You can immediately save each result to your database or pass it to another pipeline stage.
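Since each result is already valid JSON, persisting it takes one insert per row. A minimal sketch using only the standard library; the results dict below is illustrative (it mirrors the task-ID-keyed shape shown in the examples later on), and the table and column names are made up:

```python
import json
import sqlite3

# Illustrative results, keyed by task ID as in FlashLearn's output shape
results = {
    "0": {"sentiment": "positive"},
    "1": {"sentiment": "negative"},
}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (task_id TEXT PRIMARY KEY, payload TEXT)")

# Each value is already structured data, so no text parsing is needed
conn.executemany(
    "INSERT INTO results VALUES (?, ?)",
    [(task_id, json.dumps(output)) for task_id, output in results.items()],
)
conn.commit()

row = conn.execute("SELECT payload FROM results WHERE task_id = '0'").fetchone()
print(json.loads(row[0]))  # {'sentiment': 'positive'}
```

The same loop works unchanged against any database driver that accepts a string column, since every payload is serialized JSON.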


Installation

  1. Install via PyPI:

    pip install flashlearn
    
  2. Set up your LLM provider (OpenAI, DeepSeek, etc.):

    export OPENAI_API_KEY="YOUR_API_KEY"
    

    Or define a base_url for an open-source endpoint.


“All JSON, All the Time”: Example Classification Workflow

The following example classifies IMDB movie reviews into “positive” or “negative” sentiment. Notice how at each step you can view, store, or chain the partial results—always in JSON format.

from flashlearn.utils import imdb_reviews_50k
from flashlearn.skills import GeneralSkill
from flashlearn.skills.toolkit import ClassifyReviewSentiment
import json
import os

def main():
    os.environ["OPENAI_API_KEY"] = "API-KEY"

    # Step 1: Load or generate your data
    data = imdb_reviews_50k(sample=100)  # 100 sample reviews

    # Step 2: Load the JSON skill definition
    skill = GeneralSkill.load_skill(ClassifyReviewSentiment)

    # Step 3: (Optional) Save the skill definition as JSON for later reloading
    # skill.save("BinaryClassificationSkill.json")

    # Step 4: Convert data rows into JSON tasks
    tasks = skill.create_tasks(data)

    # Step 5: Save the tasks to a JSONL file, to run now or later
    with open('tasks.jsonl', 'w') as jsonl_file:
        for entry in tasks:
            jsonl_file.write(json.dumps(entry) + '\n')

    # Step 6: Run tasks (in parallel by default)
    results = skill.run_tasks_in_parallel(tasks)

    # Step 7: Every output is strict JSON that maps back to its input;
    # e.g., store the results as JSON Lines
    with open('sentiment_results.jsonl', 'w') as f:
        for task_id, output in results.items():
            input_json = data[int(task_id)]
            input_json['result'] = output
            f.write(json.dumps(input_json) + '\n')

    # Step 8: Inspect or chain the JSON results
    print("Sample result:", results.get("0"))

if __name__ == "__main__":
    main()

The output is consistently keyed by task ID ("0", "1", etc.), with JSON content your pipeline can parse or store with no guesswork.


Use Cases & Real Examples

Discover & Classify Clusters

  1. DiscoverLabelsSkill automatically identifies emerging labels in text.
  2. Feed the discovered labels into a standard classification skill.
  3. Scale your labeling process quickly, with minimal human effort.
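In outline, the discover-then-classify pattern looks like the sketch below. The two skills are replaced with trivial stand-in functions so the data flow is visible without any LLM calls; in real use, DiscoverLabelsSkill and a classification skill would fill these roles:

```python
# Toy stand-ins for the two-phase discover-then-classify pattern; no LLM involved.
texts = [
    {"text": "refund please"},
    {"text": "item arrived broken"},
    {"text": "refund still missing"},
]

def discover_labels(rows):
    # Stand-in for DiscoverLabelsSkill: derive a candidate label set from the data
    labels = set()
    for row in rows:
        if "refund" in row["text"]:
            labels.add("billing")
        if "broken" in row["text"]:
            labels.add("product issue")
    return sorted(labels)

def classify(rows, labels):
    # Stand-in for a classification skill constrained to the discovered labels
    out = {}
    for i, row in enumerate(rows):
        label = "billing" if "refund" in row["text"] else "product issue"
        assert label in labels
        out[str(i)] = {"category": label}
    return out

labels = discover_labels(texts)   # discovered label set
results = classify(texts, labels) # keyed by task ID, like FlashLearn results
print(results["1"])               # {'category': 'product issue'}
```

The key point is that phase one's output (a plain list of labels) is exactly the input a classification skill needs, so the two phases chain with no glue code.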

JSON Function Calls (GeneralSkill)

Skills define JSON schemas so the LLM must respond in valid JSON. Examples include rewriting text in comedic form, extracting docstrings, or providing bullet-point lists.

import json
from flashlearn.skills import GeneralSkill
from flashlearn.skills.toolkit import ClassifyDifficultyOfQuestion

def main():
    # Load a pre-defined skill from the toolkit
    skill = GeneralSkill.load_skill(ClassifyDifficultyOfQuestion)
    tasks = skill.create_tasks([{"text": "What is 12 squared?"}])
    results = skill.run_tasks_in_parallel(tasks)
    print(results)

Result is valid JSON like:

{"difficulty": "easy"}

No disclaimers or random text, just structured data.


Image Classification Example

import os
from openai import OpenAI
from flashlearn.skills.classification import ClassificationSkill

def main():
    images = [...]  # base64-encoded images
    skill = ClassificationSkill(
        model_name="gpt-4o-mini",
        client=OpenAI(),
        categories=["cat", "dog"],
        system_prompt="Classify images as 'cat' or 'dog'."
    )
    column_modalities = {"image_base64": "image_base64"}
    tasks = skill.create_tasks(images, column_modalities=column_modalities)
    results = skill.run_tasks_in_parallel(tasks)
    print(results)

You’ll have JSON results, e.g. {"category": "cat"} for each image, which you can export as .jsonl or pass to another system.


How It Works

Skills

A “Skill” encapsulates:

  • Which LLM to call (OpenAI, local, or custom).
  • System prompt or instructions.
  • JSON schema or format constraints.
  • Methods to parse the LLM response safely into JSON.

Tasks

Each piece of data is a “Task,” which is turned into a JSON prompt for the LLM. When run in parallel, tasks are grouped into requests respecting concurrency limits.
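The grouping described above can be pictured with a plain thread pool whose worker count caps how many requests are in flight at once. This is a simplified stand-in for run_tasks_in_parallel, with fake_llm_call standing in for the real request:

```python
import json
from concurrent.futures import ThreadPoolExecutor

def fake_llm_call(task):
    # Stub standing in for one LLM request; always returns valid JSON
    return json.dumps({"echo": task["text"].upper()})

tasks = [{"text": f"task {i}"} for i in range(10)]

# max_workers bounds concurrency, analogous to a rate/concurrency limit
with ThreadPoolExecutor(max_workers=4) as pool:
    raw = list(pool.map(fake_llm_call, tasks))

# Results keyed by task ID, mirroring FlashLearn's output shape
results = {str(i): json.loads(r) for i, r in enumerate(raw)}
print(results["0"])  # {'echo': 'TASK 0'}
```

Because pool.map preserves input order, task IDs line up with the original rows even though the requests completed concurrently.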

All JSON, All the Time

With FlashLearn, the LLM must output JSON. You’re never stuck parsing disclaimers or messy free-form text, and you can intercept and store partial or final results as JSON at any point. This is critical for robust pipelines, especially when chaining multiple steps.

Parallel Execution & Cost Estimation

  • Parallel Execution: run_tasks_in_parallel organizes concurrent requests to the LLM.
  • Cost Estimation: Quickly preview your token usage:
    cost_estimate = skill.estimate_tasks_cost(tasks)
    print("Estimated cost:", cost_estimate)
    

Loading & Customizing a Skill

Here’s how the library handles comedic rewrites:

from flashlearn.skills import GeneralSkill
from flashlearn.skills.toolkit import HumorizeText

def main():
    data = [{"original_text": "We are behind schedule."}]
    skill = GeneralSkill.load_skill(HumorizeText)
    tasks = skill.create_tasks(data)
    results = skill.run_tasks_in_parallel(tasks)
    print(results)

You’ll see output like:

{
  "0": {
    "comedic_version": "Hilarious take on your sentence..."
  }
}

Everything is well-structured JSON, suitable for further analysis.


Contributing & Community

  • Licensed under MIT.
  • Fork us to add new skills, fix bugs, or create new examples.
  • We aim to make robust LLM workflows accessible to all startups.
  • All code needs at least 95% test coverage
  • Explore the examples folder for more advanced usage patterns.

License

MIT License.
Use it freely in commercial products and personal projects.


“Hello World” Demos

Image Classification

import os
from openai import OpenAI
from flashlearn.skills.classification import ClassificationSkill
from flashlearn.utils import cats_and_dogs

def main():
    # os.environ["OPENAI_API_KEY"] = 'YOUR API KEY'
    data = cats_and_dogs(sample=6)

    skill = ClassificationSkill(
        model_name="gpt-4o-mini",
        client=OpenAI(),
        categories=["cat", "dog"],
        max_categories=1,
        system_prompt="Classify what's in the picture."
    )

    column_modalities = {"image_base64": "image_base64"}
    tasks = skill.create_tasks(data, column_modalities=column_modalities)
    results = skill.run_tasks_in_parallel(tasks)
    print(results)

    # Save skill definition for reuse
    skill.save("MyCustomSkillIMG.json")

if __name__ == "__main__":
    main()

Text Classification

import json
import os
from openai import OpenAI
from flashlearn.skills.classification import ClassificationSkill
from flashlearn.utils import imdb_reviews_50k

def main():
    # os.environ["OPENAI_API_KEY"] = 'YOUR API KEY'
    reviews = imdb_reviews_50k(sample=100)

    skill = ClassificationSkill(
        model_name="gpt-4o-mini",
        client=OpenAI(),
        categories=["positive", "negative"],
        max_categories=1,
        system_prompt="Classify short movie reviews by sentiment."
    )

    # Convert each row into a JSON-based task
    tasks = skill.create_tasks([{'review': x['review']} for x in reviews])
    results = skill.run_tasks_in_parallel(tasks)

    # Compare predicted sentiment with ground truth for accuracy
    correct = 0
    for i, review in enumerate(reviews):
        predicted = results[str(i)]['categories']
        reviews[i]['predicted_sentiment'] = predicted
        if review['sentiment'] == predicted:
            correct += 1

    print(f'Accuracy: {round(correct / len(reviews), 2)}')

    # Store final results as JSON Lines
    with open('results.jsonl', 'w') as jsonl_file:
        for entry in reviews:
            jsonl_file.write(json.dumps(entry) + '\n')

    # Save the skill definition
    skill.save("BinaryClassificationSkill.json")

if __name__ == "__main__":
    main()

Final Words

FlashLearn brings clarity to LLM workflows by enforcing consistent JSON output at every step. Whether you run a single classification or a multi-step pipeline, you can store partial results, debug easily, and maintain confidence in your data.
