Generate LangChain config quickly
doccano-mini
doccano-mini is a few-shot annotation tool that assists in developing applications with large language models (LLMs). Once you annotate a few texts, you can test your task (e.g., text classification) with an LLM and then download the resulting LangChain config.
Note: This is an experimental project.
Installation
pip install doccano-mini
Usage
This example uses OpenAI's API, so we first need to set the API key as an environment variable in the terminal.
export OPENAI_API_KEY="..."
We use text-davinci-003 by default. If you want to change the model, set the OPENAI_MODEL_NAME environment variable in the terminal.
export OPENAI_MODEL_NAME="gpt-3.5-turbo"
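If you prefer to configure everything from Python (for example, in a launcher script), the same variables can be set with the standard library; a minimal sketch, where both values below are placeholders:

```python
import os

# Placeholder values: substitute your real API key.
os.environ["OPENAI_API_KEY"] = "sk-..."            # required
os.environ["OPENAI_MODEL_NAME"] = "gpt-3.5-turbo"  # optional; text-davinci-003 is the default
```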
Then, we can run the server.
doccano-mini
Now, open your browser and go to http://localhost:8501/ to see the interface.
Development
poetry install
streamlit run doccano_mini/app.py
Hashes for doccano_mini-0.0.2-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | 1267193f636a092424857ce3de97e19fda1f72b7841e522839fa68ea453221ec
MD5 | a0b8cb1015798266a3fc9fc525b982c6
BLAKE2b-256 | e1387a93cd0e54023b54d7bd5db73d90ed9a485e772def9ef8349d2d3e4284a9