
A library for creating encoder pipelines


Introduction

Modern encoders usually have more than one encoding stage. Developers build pipelines by hand to convert their text into vectors, even though many steps are atomic building blocks that could be reused. On top of that, some encoding steps take a lot of time, so caching gets reinvented over and over. The Encoder library provides a simple way to initialize encoders and construct pipelines.

Install

pip install encoder-lib[bert_embedded,bert_client]

Get started

Let's create a thin BERT client for bert-as-service:

from encoders.encoder_factory import EncoderFactory

encoder_conf_dict = {
    "default": {
        "type": "bert_client",
        "input_dim": 1,
        "output_dim": 768,
        "params": {
            "port": 5555,
            "port_out": 5556,
            "ip": "localhost",
            "timeout": 5000,
        }
    }
}
encoder_factory = EncoderFactory(encoder_conf_dict)

encoder = encoder_factory.get_encoder("default")
documents_list = ["Hello World!"]
vectors = encoder.encode(documents_list)

Cool, we have an encoder, but each request over the network takes time. Let's enhance the encoder with a simple in-memory cache:

from encoders.encoder_factory import EncoderFactory

encoder_conf_dict = {
    "default": {
        "type": "bert_client",
        "input_dim": 1,
        "output_dim": 768,
        "params": {
            "port": 5555,
            "port_out": 5556,
            "ip": "localhost",
            "timeout": 5000,
        },
        "cache": {
            "type": "simple"
        }
    }
}
encoder_factory = EncoderFactory(encoder_conf_dict)

encoder = encoder_factory.get_encoder("default")
documents_list = ["Hello World!"]
# The first call sends a request over the network
vectors = encoder.encode(documents_list)
# The second call takes the vectors from the cache
vectors = encoder.encode(documents_list)
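Conceptually, the simple cache behaves like a dict-based memoization wrapper around an encoder. The sketch below is illustrative only; `CachingEncoder` and `CountingEncoder` are made-up names, not part of the library's API:

```python
class CountingEncoder:
    """Stand-in for a network encoder that counts remote calls."""
    def __init__(self):
        self.calls = 0

    def encode(self, documents):
        self.calls += 1
        return [[float(len(doc))] for doc in documents]


class CachingEncoder:
    """Dict-based memoization wrapper (illustrative, not the library's code)."""
    def __init__(self, encoder):
        self.encoder = encoder
        self._cache = {}

    def encode(self, documents):
        # Only documents missing from the cache go to the wrapped encoder.
        missing = [d for d in documents if d not in self._cache]
        if missing:
            for doc, vec in zip(missing, self.encoder.encode(missing)):
                self._cache[doc] = vec
        return [self._cache[d] for d in documents]


backend = CountingEncoder()
encoder = CachingEncoder(backend)
encoder.encode(["Hello World!"])  # goes to the backend
encoder.encode(["Hello World!"])  # served from the cache
print(backend.calls)  # 1
```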

The simple cache stores data in memory without any size restriction. Additionally, we can save warm-up time by loading pre-computed vectors from a file:

encoder_conf_dict = {
    "default": {
        "type": "bert_client",
        "input_dim": 1,
        "output_dim": 768,
        "params": {
            "port": 5555,
            "port_out": 5556,
            "ip": "localhost",
            "timeout": 5000, 
        },
        "cache": {
            "type": "simple",
            "params": {
                "path_desc": {
                    "type": "absolute",
                    "file": "/cache/bert_cache.pkl"
                }
            }
        }
    }
}
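The on-disk format of the pre-computed cache is not documented here. Assuming it is a pickled mapping from document text to vector (a guess for illustration, not a documented contract), a warm-up file could be produced roughly like this:

```python
import os
import pickle
import tempfile

# Assumed cache format: a pickled dict mapping document text to its vector.
# Check the library source before relying on this layout.
precomputed = {"Hello World!": [0.0] * 768}

cache_path = os.path.join(tempfile.gettempdir(), "bert_cache.pkl")
with open(cache_path, "wb") as f:
    pickle.dump(precomputed, f)

# Round-trip to confirm the file loads back intact.
with open(cache_path, "rb") as f:
    restored = pickle.load(f)

print(len(restored["Hello World!"]))  # 768
```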

Path object

A path object is a flexible description of a file location. The current path object version supports:

  1. Absolute path - allows specifying the full path to a file

    path_desc:
      type: absolute
      file: full_file_path
    
  2. Relative path - allows specifying a path relative to a base directory. The full file name is split into two parts: relative and base. The relative part is stored in the "file" param; the base part is stored in an OS environment variable, which makes your config transferable to other computers.

    path_desc:
      type: relative
      file: relative_file_name
      os_env: ENV_VAR
    

    Examples

    path_desc:
      type: relative
      os_env: BERT_HOME
      file: "cache/bert_cache.pkl"
    
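The two path types resolve to a full file path roughly as follows. The `resolve_path` helper below is hypothetical, written only to illustrate the semantics; the library's actual function name and behavior may differ:

```python
import os


def resolve_path(path_desc):
    """Hypothetical resolver mirroring the two supported path types."""
    if path_desc["type"] == "absolute":
        return path_desc["file"]
    # Relative: the base directory comes from an environment variable.
    base = os.environ[path_desc["os_env"]]
    return os.path.join(base, path_desc["file"])


os.environ["BERT_HOME"] = "/opt/bert"
resolved = resolve_path({
    "type": "relative",
    "os_env": "BERT_HOME",
    "file": "cache/bert_cache.pkl",
})
print(resolved)  # /opt/bert/cache/bert_cache.pkl
```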

Supported vectorisers

  1. Bert-as-Service client
  2. Bert embedded
  3. TF-IDF
  4. Composite vectoriser

Example configurations:

example_bert_client:
  type: bert_client
  input_dim: 1
  output_dim: 768
  params:
    port: 5555
    port_out: 5556
    ip: localhost
    timeout: 5000

example_bert_embedded:
  type: bert_embedded
  input_dim: 1
  output_dim: 768
  params:
    graph:
      path_desc:
        type: relative
        os_env: BERT_HOME
        file: model_for_inference.pbtxt
    vocab:
      path_desc:
        type: relative
        os_env: BERT_HOME
        file: vocab.txt

example_composite:
  type: composite
  params:
    encoders:
      - example_bert_client

example_tf_idf:
  type: tfidf
  params:
    path_desc:
      type: absolute
      file: /dumped_tf_idf/model.pkl
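The usual idea behind a composite vectoriser is to run several encoders and concatenate their vectors. The sketch below illustrates that idea only; `CompositeEncoder` and `FakeEncoder` are invented for the example and are not the library's classes:

```python
class CompositeEncoder:
    """Runs child encoders and concatenates their vectors (illustrative)."""
    def __init__(self, encoders):
        self.encoders = encoders

    def encode(self, documents):
        results = []
        # zip aligns, per document, the vectors from each child encoder.
        for doc_vectors in zip(*(e.encode(documents) for e in self.encoders)):
            combined = []
            for vec in doc_vectors:
                combined.extend(vec)
            results.append(combined)
        return results


class FakeEncoder:
    """Toy encoder producing constant vectors of a fixed dimension."""
    def __init__(self, dim, fill):
        self.dim, self.fill = dim, fill

    def encode(self, documents):
        return [[self.fill] * self.dim for _ in documents]


composite = CompositeEncoder([FakeEncoder(2, 1.0), FakeEncoder(3, 2.0)])
print(composite.encode(["a"]))  # [[1.0, 1.0, 2.0, 2.0, 2.0]]
```

The composite's output dimension is the sum of its children's output dimensions, which is why the config lists child encoder names rather than dimensions.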
