A utility for real-time replacement processing of streaming LLM tokens
Project description
TokFlow
A utility that outputs tokens generated by a large language model (LLM) while performing sequential replacement processing.
How it works
Tokens are fed in one after another as small pieces, as shown below.
["He","llo"," ","t","h","ere","!<","N","L>m","y ","nam","e"," ","is"," tokfl","ow.","<","N","L>N","ice"," to ","me","et you."]
The input tokens are output with each occurrence of <NL> replaced by \n.
You can specify any string as the replacement target, and you can specify multiple targets at once.
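For example, multiple replacement pairs can be passed to the constructor as a list of (search target string, replacement string) tuples. This is a minimal sketch; the "<TAB>" pair is only an illustration of specifying multiple targets and is not part of the original example.
from tokflow import TokFlow

# Each tuple is (search target string, replacement string).
# The "<TAB>" pair is an illustrative assumption showing multiple targets.
tokf = TokFlow([("<NL>", "\n"), ("<TAB>", "\t")])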
What is this library for?
I developed this to replace special tokens on the fly during sequential sentence generation with a large language model (a generative AI), but it can also be used for other kinds of string stream processing.
Install
pip install tokflow
Usage
import time
from tokflow import TokFlow

TOKEN_GENERATOR_MOCK = ["He", "llo", " ", "t", "h", "ere", "!<", "N", "L>m", "y ", "nam", "e", " ", "is", " tokfl", "ow.",
                        "<", "N", "L>N", "ice", " to ", "me", "et you."]

# Replace "<NL>" with "\n". "<NL>" is called the "search target string".
# Multiple replacement conditions can be specified.
tokf = TokFlow([("<NL>", "\n")])

for input_token in TOKEN_GENERATOR_MOCK:
    # Feed tokens in sequentially.
    # If the token could be part of a "search target string",
    # it is buffered for a while, so output_token may be empty for a while.
    output_token = tokf.put(input_token)
    print(f"{output_token}", end="", flush=True)
    # Wait included to show the sequential generation behaviour.
    time.sleep(0.3)

# Remember to output the remaining buffer at the very end. The buffer may be an empty string.
print(f"{tokf.flush()}", end="", flush=True)
Generation Options
The put method can take an optional parameter opts, as in put(text, opts). opts specifies the format of the input and output, for example {"in_type": "spot", "out_type": "spot"}.
It behaves as follows:
in_type | out_type | Description
---|---|---
spot | spot | Tokens are sent to the put method incrementally, and generated segments are output each time.
spot | full | Tokens are sent to the put method incrementally, but the full sentence is output.
full | spot | The full sentence is sent to the put method at once, and generated segments are output each time.
full | full | The full sentence is sent to the put method at once, and the full sentence is output.
Notes:
- All text strings need to be sent to the put method before calling the flush method. In particular, in full mode, all input strings are sent at once.
- If the output type (out_type) is full, the flush method must be called to obtain the final result.
- It is important to combine the call pattern of the put method and the use of the flush method appropriately to maintain consistency in each mode.
Code Example
Specify rules like condition = {"in_type": "full", "out_type": "full"}, and pass condition as an argument to put and flush.
tokf = TokFlow([("<NL>", "\n")])
condition = {"in_type": "full", "out_type": "full"}
prev_len = 0

for input_token_base in get_example_texts():
    output_sentence = tokf.put(input_token_base, condition)
    print(f"output_sentence:{output_sentence}")
    if prev_len > len(output_sentence):
        raise ValueError("Length error")
    if "<NL>" in output_sentence:
        raise Exception("Failure: a string that should have been converted was found.")
    prev_len = len(output_sentence)

output_sentence = tokf.flush(condition)
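As a complementary sketch, the incremental-input, cumulative-output combination from the table above ({"in_type": "spot", "out_type": "full"}) can be driven the same way. This reuses TOKEN_GENERATOR_MOCK from the Usage section; the intermediate return values depend on the buffering state, and flush is required for the final result because out_type is full.
tokf = TokFlow([("<NL>", "\n")])
condition = {"in_type": "spot", "out_type": "full"}

for input_token in TOKEN_GENERATOR_MOCK:
    # With out_type "full", each call returns the whole sentence converted so far.
    sentence_so_far = tokf.put(input_token, condition)

# flush must be called to obtain the final, fully converted sentence.
print(tokf.flush(condition))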
SentenceStop Class
The SentenceStop class is designed to detect specific keywords and stop text generation at the point where the keyword is found. It assumes a situation where text is input one character at a time.
Main Features
- Detection of specific keywords: Detects specific keywords within the string. The detected keywords are treated as stop strings.
- Stop text generation: Stops text generation at the position of the detected stop string. Specifically, it returns the text at the point where the stop string is detected.
- Real-time processing: Assumes a situation where strings are input one character at a time, enabling real-time processing.
How to use
Specify the keywords to stop on at initialization. After that, input the text one character at a time with the put method; when a stop string is found, it returns the text at that point. When all input is finished, use the flush method to perform the remaining processing.
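A minimal sketch of this call pattern follows. The constructor argument form (a list of stop keywords) and the exact return values of put and flush are assumptions based on the description above, not a confirmed signature; "</s>" is only an illustrative stop keyword.
from tokflow import SentenceStop

# Assumption: the constructor takes the stop keyword(s) to detect.
stopper = SentenceStop(["</s>"])

for ch in "Hello there.</s>rest":
    out = stopper.put(ch)  # feed one character at a time
    if out:
        print(out, end="", flush=True)

# When all input is finished, flush performs the remaining processing.
print(stopper.flush(), end="", flush=True)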
Processing
About Internal processing
Tokens are read sequentially in real time. Each token read is appended to the tokens read so far; this accumulated string is referred to as the "token buffer". During this sequential processing, when a pre-specified string (hereafter the "search target string") appears in the token buffer, it is replaced with another string (hereafter the "replacement string").
Because tokens arrive sequentially, at intermediate stages the token buffer may hold a string that is unrelated to the search target string, or one that is only a prefix of it. If the token buffer is arranged in an order that can no longer form a search target string, the buffer is returned as the method's return value the moment that determination is made. On the other hand, if the token buffer could still form a search target string, the return value remains an empty string until either the search target string appears or it is determined that it can no longer appear.
By buffering only while the search target string might still appear, most sequential tokens can be displayed as they are, and replacement is delayed only when necessary, which enables stream processing.
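To make the buffering behaviour concrete, the following sketch traces put over a few tokens from the Usage example. The intermediate return values noted in the comments are illustrative of the mechanism described above, not exact guaranteed outputs.
from tokflow import TokFlow

tokf = TokFlow([("<NL>", "\n")])

# "ere" cannot begin "<NL>", so it can be released as soon as that is determined.
print(repr(tokf.put("ere")))   # e.g. 'ere'

# "!<" ends with "<", which could be the start of "<NL>", so that part stays buffered.
print(repr(tokf.put("!<")))    # e.g. '!'

# "<N" is still a possible prefix of "<NL>", so nothing is released yet.
print(repr(tokf.put("N")))     # e.g. ''

# "L>m" completes "<NL>", which is replaced with "\n" before the buffer is released.
print(repr(tokf.put("L>m")))   # e.g. '\nm'

# flush releases whatever is still buffered at the end.
print(repr(tokf.flush()))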
TokFlow License
Open source license
The open source license has been specifically designed to enable the development of open source and personal projects using TokFlow. The open source license associated with TokFlow is the GNU General Public License version 3 (GPLv3). The GPLv3 has many terms, yet perhaps the most crucial is its 'sticky' nature when you distribute your work publicly. As outlined in the GPL FAQ:
"Upon releasing a modified version of your program to the public, the GPLv3 requires you to make the modified source code available to the users of your program, under the GPLv3."
Publicly releasing a project that utilises TokFlow therefore requires that project to be licensed under the GPLv3. If you're comfortable with this, you're more than welcome to use TokFlow under the GPLv3, without the need to acquire a commercial license.
However, if you wish to include this library in your tool and distribute it under a license other than the GPLv3, or if you wish to distribute it for a fee, or should you want to use it for commercial purposes, obtaining a commercial license will be necessary. Please don't hesitate to contact us for discussion.
Commercial OEM License
If you want to include TokFlow as part of a commercial product, SDK, or toolkit,
choose the Commercial OEM license.
Commercial OEM licenses are customized for each customer. Contact riversun.org@gmail.com
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distribution
File details
Details for the file tokflow-1.3.0.tar.gz.
File metadata
- Download URL: tokflow-1.3.0.tar.gz
- Upload date:
- Size: 30.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.10.9
File hashes
Algorithm | Hash digest
---|---
SHA256 | 7d9798cdb8478ec198dde4b06c3124e59e3dc03968ef7e190943e2de72c4965c
MD5 | 18935605e195dfbaac77590a83069210
BLAKE2b-256 | f2a39cb73c99725e1f17f59e8e8d72b133d11ebcb5cbbe47b72ee51f0b66a496
File details
Details for the file tokflow-1.3.0-py3-none-any.whl.
File metadata
- Download URL: tokflow-1.3.0-py3-none-any.whl
- Upload date:
- Size: 25.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.10.9
File hashes
Algorithm | Hash digest
---|---
SHA256 | ed7ec341e8cb0e74fe8111625f780a7b17321239919f03c014d5a563d80c2edb
MD5 | d0b11659b1ee1ec4ebd42937ed4a532d
BLAKE2b-256 | 88e534ab9bbc4bd92806ca5e7072cb684724cf5487fd79f863a256b649eda1ea