langchain-ChatGLM

Easily use ChatGLM in LangChain.
Install Requirements
pip install -r requirements.txt
Usage

from chatglm_pipline import ChatGLMPipeline
from langchain import PromptTemplate, LLMChain
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

# Stream generated tokens to stdout as they are produced.
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])

llm = ChatGLMPipeline.from_model_id(
    model_id="THUDM/chatglm2-6b",
    device=-1,  # set to 0 to run on the GPU
    model_kwargs={"temperature": 0, "max_length": 64, "trust_remote_code": True},
    callback_manager=callback_manager,
    verbose=True,
)

# Chinese prompt: "Q: {question}\nA: Let's think step by step."
template = """问: {question}
答: 让我们一步一步地思考."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=llm)

# "What is the relationship between Hua Chenyu and Zhang Bichen?"
question = "华晨宇和张碧晨是什么关系?"
llm_chain.run(question)
model_kwargs

key | values | remark
---|---|---
"device" | "cuda" | use CUDA acceleration
"float" | True, False | use CPU inference
"quantize" | 8 | quantization
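To make the table concrete, here is a small sketch of building a `model_kwargs` dict from these options. The helper name `make_model_kwargs` is hypothetical (not part of this package); the keys and values are the ones listed above, and the dict would be passed to `ChatGLMPipeline.from_model_id` as in the usage example.

```python
from typing import Optional


def make_model_kwargs(use_gpu: bool = False, quantize: Optional[int] = None) -> dict:
    """Hypothetical helper assembling model_kwargs per the table above."""
    kwargs = {"trust_remote_code": True}
    if use_gpu:
        kwargs["device"] = "cuda"   # use CUDA acceleration
    else:
        kwargs["float"] = True      # use CPU inference
    if quantize is not None:
        kwargs["quantize"] = quantize  # e.g. 8 for 8-bit quantization
    return kwargs
```

For example, `make_model_kwargs(use_gpu=True, quantize=8)` yields a dict selecting CUDA with 8-bit quantization.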