extra-model

Code to run the Extra algorithm for unsupervised topic/aspect extraction on English texts.

Read the Official Documentation here

Quick start

IMPORTANT:

  1. When running Extra inside a Docker container, make sure that the Docker process has enough resources. For example, on Mac/Windows it should have at least 8 GB of RAM available to it. Read More about RAM Requirements
  2. The GitHub repo does not come with the GloVe embeddings. See the Downloading Embeddings section for how to download the required embeddings.

Using docker-compose

First, build the image:

docker-compose build

Then, run the following command to make sure that extra-model was installed correctly:

docker-compose run test

Downloading Embeddings

The next step is to download the embeddings (we use GloVe from Stanford in this project).

To download the required embeddings, run the following command:

docker-compose run --rm setup

The embeddings will be downloaded, unzipped, and formatted into a space-efficient format. Files will be saved in the embeddings/ directory in the root of the project. If the process fails, it can be safely restarted. If you want to restart the process with new files, delete all files except README.md in the embeddings/ directory.
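The restart advice above can be scripted. Here is a minimal sketch; the `reset_embeddings` helper is hypothetical and not part of the package, it simply deletes everything except README.md so the download can start fresh:

```python
from pathlib import Path


def reset_embeddings(embeddings_dir: str = "embeddings") -> list:
    """Delete every file except README.md in the embeddings directory,
    so the download process can be restarted with fresh files."""
    removed = []
    for entry in Path(embeddings_dir).iterdir():
        if entry.is_file() and entry.name != "README.md":
            entry.unlink()
            removed.append(entry.name)
    return sorted(removed)
```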

[Optional] Run docker-compose build again

After you've downloaded the embeddings, you may want to run docker-compose build again. This will build an image with embeddings already present inside the image.

The tradeoff here is that the image will be much bigger, but you won't spend ~2 minutes each time you run extra-model waiting for embeddings to be mounted into the container. On the other hand, building an image with embeddings in the context will increase build time from ~3 minutes to ~10 minutes.

Run extra-model

Finally, running extra-model is as simple as:

docker-compose run extra-model /package/tests/resources/100_comments.csv

NOTE: when using this approach, the input file must be mounted inside the container. By default, everything from the extra-model folder is mounted to the /package/ folder. This can be changed in docker-compose.yaml

This will produce a result.csv file in the /io/ folder (the default setting).

The location of the output can be changed by supplying a second path, e.g.:

docker-compose run extra-model /package/tests/resources/100_comments.csv /io/another_folder

The output filename can also be changed if you want it to be something other than result.csv by supplying a third argument:

docker-compose run extra-model /package/tests/resources/100_comments.csv /io/another_folder another_filename.csv

More examples, as well as an explanation of the input/output format, are available in the official documentation.
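Once a run finishes, the resulting CSV can be inspected with Python's standard library. A small sketch; the column names in the stand-in file below are purely illustrative and do not reflect the real output schema:

```python
import csv
from pathlib import Path
from tempfile import TemporaryDirectory


def load_results(result_path):
    """Read an extra-model output CSV into a list of dicts, one per row."""
    with open(result_path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))


# Stand-in for /io/result.csv; the header here is illustrative only.
with TemporaryDirectory() as tmp:
    sample = Path(tmp) / "result.csv"
    sample.write_text("column_a,column_b\nfoo,bar\nbaz,qux\n", encoding="utf-8")
    rows = load_results(sample)
    print(len(rows))  # prints 2
```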

Using command line

Install extra-model

First, install extra-model via pip:

pip install extra-model

Downloading Embeddings

Next, run the following to download and set up the required embeddings (we use GloVe from Stanford in this project):

extra-model-setup

The embeddings will be downloaded, unzipped, formatted into a space-efficient format, and saved in /embeddings.

If the process fails, it can be safely restarted. If you want to restart the process with new files, delete all files except README.md in the embeddings directory.

Run extra-model

Once set up, running extra-model is as simple as:

extra-model tests/resources/100_comments.csv

This will produce a result.csv file in /io. If you want to change the output directory, provide it as a second argument to extra-model like so:

extra-model tests/resources/100_comments.csv /path/to/store/output

The output filename can also be changed if you want it to be something other than result.csv by supplying a third argument to extra-model:

extra-model tests/resources/100_comments.csv /path/to/store/output another_filename.csv
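The CLI can also be invoked from a Python script via subprocess. A sketch under the assumption that extra-model is installed and on PATH; the helper names are ours, not part of the package:

```python
import subprocess


def build_command(input_csv, output_dir=None, output_filename=None):
    """Assemble the extra-model CLI invocation described above:
    input file, then an optional output directory, then an optional filename."""
    cmd = ["extra-model", str(input_csv)]
    if output_dir is not None:
        cmd.append(str(output_dir))
        if output_filename is not None:
            cmd.append(output_filename)
    return cmd


def run_extra_model(input_csv, output_dir=None, output_filename=None):
    """Run the CLI; raises CalledProcessError if extra-model exits non-zero."""
    subprocess.run(build_command(input_csv, output_dir, output_filename), check=True)
```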

Using as a Python package

Install extra-model

First, install extra-model via pip:

pip install extra-model

Downloading Embeddings

Next, use either the extra-model-setup CLI or docker-compose to download and set up the required embeddings (we use GloVe from Stanford in this project):

extra-model-setup

or

docker-compose run --rm setup

The embeddings will be downloaded, unzipped, and formatted into a space-efficient format. For the Docker-based workflow, the embeddings will be saved to the embeddings directory. For the CLI workflow, files are saved in /embeddings by default. You can choose another directory by providing it as an argument when running extra-model-setup like so:

extra-model-setup /path/to/store/embeddings

If the process fails, it can be safely restarted. If you want to restart the process with new files, delete all files except README.md in the embeddings directory.
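Before running extra-model, you may want a quick sanity check that setup completed. A hypothetical helper, based only on the note above that a fresh embeddings directory contains nothing but README.md:

```python
from pathlib import Path


def embeddings_ready(embeddings_dir="embeddings"):
    """Heuristic: setup is complete if the directory exists and contains
    anything other than the README.md it starts with."""
    p = Path(embeddings_dir)
    return p.is_dir() and any(entry.name != "README.md" for entry in p.iterdir())
```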

Use extra-model as a Python package

Once set up, you can use extra-model by calling the run() function in extra_model/_run.py:

from pathlib import Path

from extra_model._run import run

run(
    input_path=Path("input/path/file.csv"),
    output_path=Path("output/path")
)

This will process input/path/file.csv and produce a result.csv file in output/path. If you want to change the output filename to something other than result.csv, you can do so by providing an additional argument to run():

from pathlib import Path

from extra_model._run import run

run(
    input_path=Path("input/path"),
    output_path=Path("output/path"),
    output_filename=Path("output_filename.csv")
)

More examples, as well as an explanation of the input/output format, are available in the official documentation.

Authors

extra-model was written by mbalyasin@wayfair.com and mmozer@wayfair.com.
