langchain-utils
LangChain Utilities
Prompt generation using LangChain document loaders
Optimized for manual pasting into a chat interface (like ChatGPT), either in one go or in multiple parts (to work around context length limits).
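The "multiple goes" idea can be sketched as follows. This is a minimal illustration, not the library's actual code: each chunk is wrapped with a part header and an instruction asking the model to wait until all parts have arrived. The exact wording of the headers is an assumption.

```python
# Sketch of wrapping prompt chunks for multi-part pasting into a chat UI.
# The header/footer wording is hypothetical, not langchain-utils' template.
def wrap_parts(chunks: list[str]) -> list[str]:
    total = len(chunks)
    wrapped = []
    for i, chunk in enumerate(chunks, start=1):
        header = f"[Part {i}/{total}]\n"
        if i < total:
            # Intermediate parts: ask the model to hold off on answering.
            footer = "\n(Reply with 'OK' and wait for the remaining parts.)"
        else:
            # Final part: now the model has everything and can respond.
            footer = "\n(This is the last part; please answer now.)"
        wrapped.append(header + chunk + footer)
    return wrapped

parts = wrap_parts(["first half of the document", "second half of the document"])
print(parts[0])
```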
urlprompt
$ urlprompt --help
usage: urlprompt [-h] [-V] [-c] [-m model] [-S] [-M] [-s chunk_size] [-w WHAT] [-j] [-n] URL
Get a prompt consisting of the text content of a webpage
positional arguments:
URL URL
options:
-h, --help show this help message and exit
-V, --version show program's version number and exit
-c, --copy Copy the prompt to clipboard (default: False)
-m model, --model model
Model to use (default: gpt-3.5-turbo)
-S, --split Split the prompt into multiple parts (default: False)
-M, --merge Merge contents of all pages before processing (default: False)
-s chunk_size, --chunk-size chunk_size
Chunk size when splitting transcript, also used to determine whether to split (default: 2000)
-w WHAT, --what WHAT Initial knowledge you want to insert before the webpage content in the prompt (default: the content of a webpage)
-j, --javascript Use JavaScript to render the page (default: False)
-n, --dry-run Dry run (default: False)
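The `-s/--chunk-size` option above controls both whether a prompt is split and how large each piece is. A rough sketch of that kind of splitting, assuming a simple character-based measure (the library itself sizes chunks against the chosen model, so this is only an approximation):

```python
# Naive character-based splitter mirroring the -s/--chunk-size behavior:
# break the text into pieces no longer than chunk_size, preferring to cut
# at whitespace so words stay intact.
def split_text(text: str, chunk_size: int = 2000) -> list[str]:
    chunks = []
    while len(text) > chunk_size:
        # Find the last whitespace before the size limit.
        cut = text.rfind(" ", 0, chunk_size)
        if cut <= 0:
            # No whitespace found: hard-cut at the limit.
            cut = chunk_size
        chunks.append(text[:cut].strip())
        text = text[cut:].lstrip()
    if text:
        chunks.append(text)
    return chunks

chunks = split_text("word " * 1000, chunk_size=2000)
print(len(chunks), all(len(c) <= 2000 for c in chunks))
```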
pdfprompt
$ pdfprompt --help
usage: pdfprompt [-h] [-V] [-c] [-m model] [-S] [-M] [-s chunk_size] [-w WHAT]
[-n]
PDF Path
Get a prompt consisting of the text content of a PDF file
positional arguments:
PDF Path Path to the PDF file
options:
-h, --help show this help message and exit
-V, --version show program's version number and exit
-c, --copy Copy the prompt to clipboard (default: False)
-m model, --model model
Model to use (default: gpt-3.5-turbo)
-S, --split Split the prompt into multiple parts (default: False)
-M, --merge Merge contents of all pages before processing
(default: False)
-s chunk_size, --chunk-size chunk_size
Chunk size when splitting transcript, also used to
determine whether to split (default: 2000)
-w WHAT, --what WHAT Initial knowledge you want to insert before the PDF
content in the prompt (default: the content of a PDF
file)
-n, --dry-run Dry run (default: False)
ytprompt
$ ytprompt --help
usage: ytprompt [-h] [-V] [-c] [-m model] [-S] [-s chunk_size] [-n]
YouTube URL
Get a prompt consisting of the Title and Transcript of a YouTube Video
positional arguments:
YouTube URL YouTube URL
options:
-h, --help show this help message and exit
-V, --version show program's version number and exit
-c, --copy Copy the prompt to clipboard (default: False)
-m model, --model model
Model to use (default: gpt-3.5-turbo)
-S, --split Split the prompt into multiple parts (default: False)
-s chunk_size, --chunk-size chunk_size
Chunk size when splitting transcript, also used to
determine whether to split (default: 2000)
-n, --dry-run Dry run (default: False)
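Conceptually, `ytprompt` combines a video's title and transcript into a single prompt. A hedged sketch of what such a prompt might look like; the template wording here is an assumption for illustration, not the tool's actual output:

```python
# Hypothetical prompt template combining a YouTube video's title and
# transcript, in the spirit of ytprompt (wording is an assumption).
def build_prompt(title: str, transcript: str) -> str:
    return (
        f'Below is the transcript of a YouTube video titled "{title}".\n'
        "Please read it and answer my questions about it.\n\n"
        f"{transcript}"
    )

prompt = build_prompt("Intro to LangChain", "Welcome to the video. Today we...")
print(prompt.splitlines()[0])
```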
Installation
pipx
This is the recommended installation method.
$ pipx install langchain-utils
pip
$ pip install langchain-utils
Develop
$ git clone https://github.com/tddschn/langchain-utils.git
$ cd langchain-utils
$ poetry install