
Library for creating encoder pipelines

Project description

Introduction

Modern encoders have more than one stage of encoding. Developers manually build pipelines for converting their text into vectors, even though some steps are atomic building blocks that could be reused. Besides, some encoding steps take a lot of time, so everyone ends up reinventing caches. The encoder library provides a simple way to initialize encoders and construct pipelines.

Install

pip install encoder-lib[bert_embedded,bert_client]

Get started

Let's create a thin BERT client for bert-as-service:

from encoders.encoder_factory import EncoderFactory

encoder_conf_dict = {
    "default": {
        "type": "bert_client",
        "input_dim": 1,
        "output_dim": 768,
        "params": {
            "port": 5555,
            "port_out": 5556,
            "ip": "localhost",
            "timeout": 5000, 
        }
    }
}
encoder_factory = EncoderFactory(encoder_conf_dict)

encoder = encoder_factory.get_encoder("default")
documents_list = ["Hello World!"]
vectors = encoder.encode(documents_list)

Cool, we have an encoder, but each request over the network takes time. Let's enhance the encoder and add a simple in-memory cache:

from encoders.encoder_factory import EncoderFactory

encoder_conf_dict = {
    "default": {
        "type": "bert_client",
        "input_dim": 1,
        "output_dim": 768,
        "params": {
            "port": 5555,
            "port_out": 5556,
            "ip": "localhost",
            "timeout": 5000, 
        },
        "cache": {
            "type": "simple"
        }
    }
}
encoder_factory = EncoderFactory(encoder_conf_dict)

encoder = encoder_factory.get_encoder("default")
documents_list = ["Hello World!"]
# Encoder sends request over network
vectors = encoder.encode(documents_list)
# This call takes vector from cache 
vectors = encoder.encode(documents_list)
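Conceptually, the simple cache memoizes encode calls by document text, so repeated documents never trigger a second network round trip. The sketch below is illustrative only (a plain dict wrapper with toy classes, not the library's actual cache implementation):

```python
class CachedEncoder:
    """Illustrative in-memory cache: results are keyed by document text."""

    def __init__(self, encoder):
        self._encoder = encoder
        self._cache = {}

    def encode(self, documents):
        missing = [doc for doc in documents if doc not in self._cache]
        if missing:
            # Only uncached documents trigger a (potentially slow) backend call
            for doc, vec in zip(missing, self._encoder.encode(missing)):
                self._cache[doc] = vec
        return [self._cache[doc] for doc in documents]


class FakeEncoder:
    """Stand-in for a bert_client encoder; counts backend round trips."""

    def __init__(self):
        self.calls = 0

    def encode(self, documents):
        self.calls += 1
        return [[float(len(doc))] for doc in documents]


fake = FakeEncoder()
cached = CachedEncoder(fake)
cached.encode(["Hello World!"])  # first call hits the backend
cached.encode(["Hello World!"])  # second call is served from cache
print(fake.calls)  # → 1
```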

The simple cache stores data in memory without any memory restriction. Besides, we can save time on warm-up and load pre-computed vectors from a file:

encoder_conf_dict = {
    "default": {
        "type": "bert_client",
        "input_dim": 1,
        "output_dim": 768,
        "params": {
            "port": 5555,
            "port_out": 5556,
            "ip": "localhost",
            "timeout": 5000, 
        },
        "cache": {
            "type": "simple",
            "params": {
                "path_desc": {
                    "type": "absolute",
                    "file": "/cache/bert_cache.pkl"
                }
            }
        }
   }
}
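The on-disk format of the warm-up file is not documented here; assuming it is a pickled mapping from document text to vector (an assumption, not a confirmed contract of encoder-lib), a pre-computed cache file could be produced like this:

```python
import os
import pickle
import tempfile

# Assumed cache layout: {document_text: vector}. Verify against your
# encoder-lib version before relying on this format.
precomputed = {
    "Hello World!": [0.1] * 768,
    "Goodbye World!": [0.2] * 768,
}

cache_path = os.path.join(tempfile.gettempdir(), "bert_cache.pkl")
with open(cache_path, "wb") as fh:
    pickle.dump(precomputed, fh)

# Sanity check: the file round-trips back to the same mapping
with open(cache_path, "rb") as fh:
    restored = pickle.load(fh)
print(len(restored["Hello World!"]))  # → 768
```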

Path object

The path object is a flexible description of a file location. The current path object version supports:

  1. Absolute path - allows specifying the full path to a file

    path_desc:
      type: absolute
      file: full_file_path
    
  2. Relative path - allows specifying a relative path to a file. The full file name is split into two parts: relative and base. The relative part is stored in the "file" parameter. The base part is stored in an OS environment variable, which makes your config transferable to other machines.

    path_desc:
      type: relative
      file: relative_file_name
      os_env: ENV_VAR
    

    Examples

    path_desc:
      type: relative
      os_env: BERT_HOME
      file: "cache/bert_cache.pkl"
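Resolution of a path_desc can be pictured as joining the environment variable's value with the "file" part. The sketch below is a minimal illustration of that idea, not encoder-lib's actual resolver:

```python
import os

def resolve_path(path_desc):
    """Resolve a path_desc dict into a concrete file path (illustrative)."""
    if path_desc["type"] == "absolute":
        return path_desc["file"]
    if path_desc["type"] == "relative":
        base = os.environ[path_desc["os_env"]]  # e.g. BERT_HOME
        return os.path.join(base, path_desc["file"])
    raise ValueError("unknown path type: %s" % path_desc["type"])

os.environ["BERT_HOME"] = "/opt/bert"  # set here for demonstration only
print(resolve_path({"type": "relative",
                    "os_env": "BERT_HOME",
                    "file": "cache/bert_cache.pkl"}))
# → /opt/bert/cache/bert_cache.pkl
```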
    

Supported vectorisers

  1. Bert-as-Service client
  2. Bert embedded
  3. TF-IDF
  4. Composite vectoriser

example_bert_client:
  type: bert_client
  input_dim: 1
  output_dim: 768
  params:
    port: 5555
    port_out: 5556
    ip: localhost
    timeout: 5000

example_bert_embedded:
  type: bert_embedded
  verbose: True
  input_dim: 1
  output_dim: 768
  params:
    graph:
      path_desc:
        type: relative
        os_env: BERT_HOME
        file: model_for_inference.pbtxt
    vocab:
      path_desc:
        type: relative
        os_env: BERT_HOME
        file: vocab.txt

example_composite:
  type: composite
  params:
    encoders:
      - example_bert_client

example_tf_idf:
  type: tfidf
  params:
    path_desc:
      type: absolute
      file: /dumped_tf_idf/model.pkl
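A composite vectoriser presumably runs each child encoder and combines their outputs. The sketch below concatenates per-encoder vectors, which is one plausible reading of "composite" with toy encoders, not the library's confirmed behavior:

```python
class CompositeEncoder:
    """Illustrative composite: concatenates the vectors of its child encoders."""

    def __init__(self, encoders):
        self._encoders = encoders

    def encode(self, documents):
        per_encoder = [enc.encode(documents) for enc in self._encoders]
        # For each document, concatenate its vectors across all encoders
        return [sum(vecs, []) for vecs in zip(*per_encoder)]


class ConstEncoder:
    """Toy encoder returning a fixed-size constant vector per document."""

    def __init__(self, dim, value):
        self.dim, self.value = dim, value

    def encode(self, documents):
        return [[self.value] * self.dim for _ in documents]


composite = CompositeEncoder([ConstEncoder(2, 1.0), ConstEncoder(3, 2.0)])
print(composite.encode(["a", "b"]))
# → [[1.0, 1.0, 2.0, 2.0, 2.0], [1.0, 1.0, 2.0, 2.0, 2.0]]
```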

Release notes

1.2

  1. Added a verbose parameter to BaseEncoder and all child classes
  2. Added a simple_dump_to_pickle method for dumping an EncoderCache.

1.0

  1. Added base functionality for Bert and TF-IDF encoders
