Task-oriented finetuning for better embeddings on neural search.

Finetuner helps you create experiments to improve embeddings on search tasks. It accompanies you in delivering the last mile of performance tuning for neural search applications.

Fine-tuning is an effective way to improve performance on neural search tasks. However, it is non-trivial for many deep learning engineers.

Finetuner makes fine-tuning easier, faster, and more performant by streamlining the workflow and handling all complexity and infrastructure in the cloud. With Finetuner, one can easily uplift pre-trained models to be more performant and production-ready.

📈 Performance promise: uplift pretrained models and deliver SOTA performance on domain-specific neural search applications.

🔱 Simple yet powerful: easy access to 40+ mainstream losses, 10+ optimizers, layer pruning, weight freezing, dimensionality reduction, hard-negative mining, cross-modal models, and distributed training.

☁ All-in-cloud: instant training with our free GPU (Apply here for free!); manage runs, experiments, and artifacts on Jina Cloud without worrying about resource provisioning, integration complexity, or infrastructure.
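Hard-negative mining, one of the features listed above, selects the negative examples most similar to an anchor so that training focuses on the hardest cases. The following is a minimal, framework-free sketch of the idea (plain Python vectors, not Finetuner's internal implementation):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def hardest_negative(anchor, negatives):
    """Return the negative most similar to the anchor --
    the one that produces the strongest training signal."""
    return max(negatives, key=lambda n: cosine(anchor, n))

anchor = [1.0, 0.0]
negatives = [[0.0, 1.0], [0.9, 0.1], [-1.0, 0.0]]
print(hardest_negative(anchor, negatives))  # [0.9, 0.1], the near-duplicate
```

In a triplet-loss setup, pairing each anchor with its hardest negative rather than a random one typically speeds up convergence.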

Documentation

Benchmark

| Model  | Task                              | Metric | Pretrained | Finetuned | Delta     |
|--------|-----------------------------------|--------|------------|-----------|-----------|
| BERT   | Quora Question Answering          | mRR    | 0.835      | 0.967     | ⬆ 15.8%  |
|        |                                   | Recall | 0.915      | 0.963     | ⬆ 5.3%   |
| ResNet | Visual similarity search on TLL   | mAP    | 0.102      | 0.166     | ⬆ 62.7%  |
|        |                                   | Recall | 0.235      | 0.372     | ⬆ 58.3%  |
| CLIP   | Deep Fashion text-to-image search | mRR    | 0.289      | 0.488     | ⬆ 69.9%  |
|        |                                   | Recall | 0.109      | 0.346     | ⬆ 217.0% |

[*] All metrics are evaluated at k@20, after training for 5 epochs using the Adam optimizer with a learning rate of 1e-5.
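The metrics in the table follow their standard information-retrieval definitions; mRR is the mean of per-query reciprocal ranks, and Recall@k is the fraction of relevant documents retrieved in the top k. A small, self-contained sketch of both (these are the textbook definitions, not Finetuner's evaluation code):

```python
def reciprocal_rank(ranked_ids, relevant_ids, k=20):
    """1/rank of the first relevant hit within the top k, else 0."""
    for rank, doc_id in enumerate(ranked_ids[:k], start=1):
        if doc_id in relevant_ids:
            return 1.0 / rank
    return 0.0

def recall_at_k(ranked_ids, relevant_ids, k=20):
    """Fraction of relevant documents retrieved in the top k."""
    hits = sum(1 for doc_id in ranked_ids[:k] if doc_id in relevant_ids)
    return hits / len(relevant_ids)

# One query whose two relevant docs appear at ranks 2 and 4:
ranked = ['d9', 'd3', 'd7', 'd5', 'd1']
relevant = {'d3', 'd5'}
print(reciprocal_rank(ranked, relevant))  # 0.5
print(recall_at_k(ranked, relevant))      # 1.0
```

Averaging `reciprocal_rank` over all queries yields the mRR values reported above.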

Install

pip install -U finetuner

As of 0.5.0, Finetuner computation is hosted on Jina Cloud. The last local version is 0.4.1; you can install it via pip or check out the git tags/releases here.

Get Started

The following code snippet shows how to fine-tune ResNet50 on the Totally Looks Like dataset; it can be run as-is:

import finetuner
from finetuner.callback import EvaluationCallback

finetuner.login()

run = finetuner.fit(
    model='resnet50',
    run_name='resnet50-tll-run',
    train_data='tll-train-da',
    callbacks=[EvaluationCallback(query_data='tll-eval-da')],
)

Fine-tuning might take 5 minutes to finish. You can later reconnect to your run with:

import finetuner

finetuner.login()

run = finetuner.get_run('resnet50-tll-run')
print(run.status())
print(run.logs())

run.save_artifact('resnet-tll')

Specifically, the code snippet describes the following steps:

  • Login to Finetuner (Get free access here!)
  • Select the backbone model, plus training and evaluation data for your evaluation callback.
  • Start the cloud run.
  • Monitor the status: check the status and logs of the run.
  • Save model for further use and integration.
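Once the artifact is saved, the fine-tuned model's embeddings power nearest-neighbour retrieval in your application. The following is a minimal, framework-free sketch of that downstream step (the vectors here are toy stand-ins for real model output, and the document index is a plain dict rather than a vector database):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search(query_vec, index, top_k=3):
    """Rank indexed documents by cosine similarity to the query embedding."""
    scored = sorted(index.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:top_k]]

# Toy embeddings standing in for the fine-tuned model's output:
index = {
    'shirt':  [0.9, 0.1],
    'dress':  [0.1, 0.9],
    'tshirt': [0.8, 0.2],
}
print(search([1.0, 0.0], index, top_k=2))  # ['shirt', 'tshirt']
```

In production, the same pattern applies with the saved model producing the embeddings and an approximate nearest-neighbour index replacing the linear scan.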

Next steps

Intrigued? That's only scratching the surface of what Finetuner is capable of. Read our docs to learn more.

Support

Join Us

Finetuner is backed by Jina AI and licensed under Apache-2.0. We are actively hiring AI engineers and solution engineers to build the next neural search ecosystem in open source.
