Visualization tool for various generation tasks on Language Models.
Conditional Language Model Generation Visualization
- when evaluating language models, it is often a pain to see what is generated and why
- this little package is a Vue.js frontend together with a Flask backend, designed to easily show interesting visualizations on conditional generation models - it handles frontend-backend communication as well as frontend rendering
- hence the developer can focus only on the ML aspects of their work!
VERSION: 0.3.0
changelog
- added visuallm.components.ChatComponent, so you can now chat with the models!
- added support for the OpenAI API, so by providing a token you can chat with OpenAI models!
- refactoring and better code quality
- more documentation strings
Installation
- install from PyPI:
pip install visuallm
Usage
The documentation is a work in progress as of now; however, here are several snippets of what the library can do.
If you want to use the app with the PersonaChat dataset, you can play with the prepared example by running:
flask --app examples_py.persona_chat_example.app run
Generation Playground
Select which parameters you want to use for generation, plug in a HuggingFace model or an OpenAI token, and have fun experimenting with various generation hyperparameters!
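To make the generation hyperparameters concrete, here is a minimal, self-contained sketch (plain Python, not using visuallm itself) of how two common ones, temperature and top-k, reshape a next-token distribution before sampling:

```python
import math

def adjust_distribution(logits, temperature=1.0, top_k=0):
    """Apply temperature scaling and optional top-k filtering to raw
    logits, returning a normalized probability distribution."""
    scaled = [l / temperature for l in logits]
    if top_k > 0:
        # keep only the k largest logits, mask out the rest
        threshold = sorted(scaled, reverse=True)[top_k - 1]
        scaled = [s if s >= threshold else float("-inf") for s in scaled]
    exps = [math.exp(s) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# toy logits for a 4-token vocabulary
logits = [2.0, 1.0, 0.5, 0.1]
sharp = adjust_distribution(logits, temperature=0.5)  # lower temperature: peakier
flat = adjust_distribution(logits, temperature=2.0)   # higher temperature: flatter
topk = adjust_distribution(logits, top_k=2)           # only the 2 best tokens survive
```

Lowering the temperature concentrates probability mass on the top candidates, while top-k zeroes out everything outside the k most likely tokens; real frontends such as this one expose exactly these knobs.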
Chat Playground
Select which parameters you want to use for generation, plug in a HuggingFace model or an OpenAI token, and have fun chatting with the model!
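visuallm's own chat API is not reproduced here; as an illustrative sketch (all names below are hypothetical), this is the kind of role/content message history a chat playground maintains, in the format popularized by the OpenAI API, flattened into a single prompt for a plain language model:

```python
def render_history(messages):
    """Flatten a role/content message list into one transcript string,
    the kind of prompt a chat UI sends to a plain LM backend."""
    return "\n".join(f"{m['role']}: {m['content']}" for m in messages)

history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hi there!"},
]
prompt = render_history(history)
```

Each turn of the chat appends another message dict to the history before re-rendering the prompt.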
Visualize Next Token Predictions
By using visuallm.components.NextTokenPredictionComponent.NextTokenPredictionComponent you can plug in a HuggingFace model and go through the generation process step by step.
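To show what "step by step" means without pulling in a real model, here is a toy sketch (the bigram table and function names are invented for illustration) that records, at each decoding step, the ranked next-token candidates a visualizer would display:

```python
# Toy bigram "model": probability of the next token given the current one.
BIGRAMS = {
    "<s>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.7, "dog": 0.3},
    "a":   {"cat": 0.5, "dog": 0.5},
    "cat": {"</s>": 1.0},
    "dog": {"</s>": 1.0},
}

def next_token_steps(start="<s>", max_steps=10):
    """Greedily decode, recording (context, ranked candidates) per step:
    the per-step view a next-token prediction visualizer shows."""
    token, steps = start, []
    for _ in range(max_steps):
        candidates = sorted(BIGRAMS[token].items(), key=lambda kv: -kv[1])
        steps.append((token, candidates))
        token = candidates[0][0]  # greedy: follow the most probable token
        if token == "</s>":
            break
    return steps

steps = next_token_steps()
```

With a real HuggingFace model the candidate list would come from the model's softmaxed logits at each position, but the stepping logic is the same.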