Tongyi Qianwen Langchain adapter
DashScope Tongyi Qianwen Langchain

Explores using the Tongyi Qianwen API in langchain, modeled on the openai langchain implementation. Currently used in personal project tooling.
NOTE: langchain already ships a merged Tongyi implementation. When this project was written, that implementation was still incomplete, but after subsequent iterations it should be fine; prefer using it via:

```python
from langchain_community.llms.tongyi import Tongyi
from langchain_community.chat_models.tongyi import ChatTongyi
```
Install

pip will also install the dependencies Langchain and Dashscope-SDK:

```shell
pip install langchain-qianwen
```

Or clone the project and install manually:

```shell
git clone ... && cd langchain_qianwen
pip install -r requirements.txt

# Recommended: run the pytest unit tests to confirm everything works,
# guarding against breaking changes in the dependencies
pip install pytest
pytest
```
Prerequisites:
- Familiarity with Langchain (see the langchain documentation)
- Apply for and create an API-KEY in the Alibaba Cloud console (see the developer reference documentation)
- Set the api_key environment variable:

```shell
export DASHSCOPE_API_KEY="YOUR_DASHSCOPE_API_KEY"
```
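As a quick sanity check before constructing any model, you can verify the variable is visible to Python (a minimal sketch; the helper name `get_dashscope_key` is hypothetical, only the `DASHSCOPE_API_KEY` variable name comes from the export above):

```python
import os

# Read the key set via `export DASHSCOPE_API_KEY=...`; fail early with a
# clear message instead of a deep stack trace inside the SDK.
def get_dashscope_key() -> str:
    key = os.environ.get("DASHSCOPE_API_KEY", "")
    if not key:
        raise RuntimeError("DASHSCOPE_API_KEY is not set; run the export above first")
    return key
```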
Supports LCEL (LangChain Expression Language) syntax:

```python
from langchain.prompts import PromptTemplate
from langchain_qianwen import Qwen_v1

if __name__ == "__main__":
    joke_template = "给我讲个有关 {topic} 的笑话"
    prompt = PromptTemplate.from_template(joke_template)
    llm = Qwen_v1(
        model_name="qwen-turbo",
        temperature=0.18,
        streaming=True,
    )
    chain = prompt | llm

    for s in chain.stream({"topic": "产品经理"}):
        print(s, end="", flush=True)
```
Supports async calls with an async callback handler

p.s. Currently the llm model (Qwen_v1) can use AsyncIteratorCallbackHandler; the chat model (ChatQwen_v1) is pending an update (I haven't needed it yet...).

```python
import asyncio

from langchain.callbacks.streaming_aiter import AsyncIteratorCallbackHandler
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
from langchain_qianwen import Qwen_v1


async def use_async_handler(input):
    handler = AsyncIteratorCallbackHandler()
    llm = Qwen_v1(
        model_name="qwen-turbo",
        streaming=True,
        callbacks=[handler],
    )
    memory = ConversationBufferMemory()
    chain = ConversationChain(
        llm=llm,
        memory=memory,
        verbose=True,
    )

    asyncio.create_task(chain.apredict(input=input))
    return handler.aiter()


async def async_test():
    async_gen = await use_async_handler("hello")
    async for i in async_gen:
        print(i)


if __name__ == "__main__":
    asyncio.run(async_test())
```
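The pattern above (start a background task that pushes tokens, then consume them through an async iterator) can be sketched without langchain. Here an `asyncio.Queue` stands in for `AsyncIteratorCallbackHandler`; all names in this sketch are hypothetical:

```python
import asyncio

# A minimal stand-in for AsyncIteratorCallbackHandler: a producer task pushes
# tokens into a queue, and an async generator yields them to the consumer.
async def produce(queue: asyncio.Queue) -> None:
    for token in ["hello", " ", "world"]:
        await queue.put(token)
    await queue.put(None)  # sentinel: stream finished

async def aiter_tokens(queue: asyncio.Queue):
    while True:
        token = await queue.get()
        if token is None:
            break
        yield token

async def main() -> str:
    queue: asyncio.Queue = asyncio.Queue()
    asyncio.create_task(produce(queue))  # background "LLM call"
    chunks = [tok async for tok in aiter_tokens(queue)]
    return "".join(chunks)

print(asyncio.run(main()))  # hello world
```

This is the same shape as the langchain example: the chain call runs as a fire-and-forget task while the caller iterates tokens as they arrive.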
chat_models

```python
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.schema import HumanMessage
from langchain_qianwen import ChatQwen_v1

if __name__ == "__main__":
    chat = ChatQwen_v1(
        model_name="qwen-turbo",
        streaming=True,
        callbacks=[StreamingStdOutCallbackHandler()],
    )

    chat([HumanMessage(content="举例说明一下 PHP 为什么是世界上最好的语言")])
```
Add web search capability with an agent

```python
from langchain.agents import load_tools, AgentType, initialize_agent
from langchain_qianwen import Qwen_v1

if __name__ == "__main__":
    llm = Qwen_v1(
        model_name="qwen-plus",
    )
    # You need to request an api_key on the serpapi website
    tool_names = ["serpapi"]
    tools = load_tools(tool_names)

    agent = initialize_agent(
        tools=tools,
        llm=llm,
        agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
        verbose=True,
    )
    agent.run("今天北京的天气怎么样?")
```
Extract information from documents with embeddings

```python
from langchain.embeddings.dashscope import DashScopeEmbeddings
from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains import RetrievalQA
from langchain.document_loaders import DirectoryLoader
from langchain_qianwen import Qwen_v1

if __name__ == "__main__":
    llm = Qwen_v1(
        model_name="qwen-turbo",
    )
    loader = DirectoryLoader("./assets", glob="**/*.txt")
    documents = loader.load()
    text_splitter = CharacterTextSplitter(chunk_size=2048, chunk_overlap=0)
    texts = text_splitter.split_documents(documents)
    embeddings = DashScopeEmbeddings(
        model="text-embedding-v1",
    )
    print(f"text length: {len(texts)}")

    # Use the embedding engine to convert the texts into vectors
    db = Chroma.from_documents(texts, embeddings)
    retriever = db.as_retriever()
    # qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever, return_source_documents=True)
    qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever)

    query = "文章中的工厂模式使用例子有哪些?"
    rsp = qa.run({"query": query})
    print(rsp)
```
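The `chunk_size`/`chunk_overlap` parameters above control how the splitter windows the text. A minimal plain-Python sketch of that windowing (character-based only, ignoring the separator logic the real `CharacterTextSplitter` adds; the function name is hypothetical):

```python
def split_with_overlap(text: str, chunk_size: int, chunk_overlap: int) -> list[str]:
    # Slide a window of `chunk_size` characters, stepping forward by
    # chunk_size - chunk_overlap so consecutive chunks share context.
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = split_with_overlap("abcdefghij", chunk_size=4, chunk_overlap=2)
print(chunks)  # ['abcd', 'cdef', 'efgh', 'ghij', 'ij']
```

With `chunk_overlap=0`, as in the example above, chunks are simply disjoint slices of the input.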
For more usage, see the official langchain documentation and the examples directory.