# LlamaIndex Llms Integration: Mistral
## Installation

Install the required packages using the following commands:

```shell
%pip install llama-index-llms-mistralai
!pip install llama-index
```
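Instead of passing the key in code, you can also export it in your environment; the integration conventionally resolves the key from the `MISTRAL_API_KEY` variable when no `api_key` argument is given (treat the variable name as an assumption to verify against your installed version). A minimal sketch:

```python
import os

# Make the key available to the current process; MistralAI() can then
# typically be constructed without an explicit api_key argument.
os.environ["MISTRAL_API_KEY"] = "<replace-with-your-key>"

# Confirm the key is visible to this process.
assert os.environ["MISTRAL_API_KEY"]
```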
## Basic Usage

### Initialize the MistralAI Model

To use the MistralAI model, create an instance and provide your API key:

```python
from llama_index.llms.mistralai import MistralAI

llm = MistralAI(api_key="<replace-with-your-key>")
```
### Generate Completions

To generate a text completion for a prompt, use the `complete` method:

```python
resp = llm.complete("Paul Graham is ")
print(resp)
```
### Chat with the Model

You can also chat with the model using a list of messages. Here's an example:

```python
from llama_index.core.llms import ChatMessage

messages = [
    ChatMessage(role="system", content="You are CEO of MistralAI."),
    ChatMessage(role="user", content="Tell me the story about La plateforme"),
]
resp = llm.chat(messages)
print(resp)
```
### Using Random Seed

To set a random seed for reproducibility, initialize the model with the `random_seed` parameter:

```python
resp = MistralAI(random_seed=42).chat(messages)
print(resp)
```
## Streaming Responses

### Stream Completions

You can stream responses using the `stream_complete` method:

```python
resp = llm.stream_complete("Paul Graham is ")
for r in resp:
    print(r.delta, end="")
```
### Stream Chat Responses

To stream chat messages, use the `stream_chat` method:

```python
messages = [
    ChatMessage(role="system", content="You are CEO of MistralAI."),
    ChatMessage(role="user", content="Tell me the story about La plateforme"),
]
resp = llm.stream_chat(messages)
for r in resp:
    print(r.delta, end="")
```
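Each streamed chunk carries a `delta` holding only the newly generated text, so printing deltas with `end=""` (as above) is equivalent to concatenating them into the full response. A local illustration with hypothetical chunks (no API call involved):

```python
# Hypothetical deltas, as chunks might arrive from stream_complete/stream_chat.
deltas = ["La plateforme ", "is Mistral AI's ", "developer platform."]

# Accumulating the deltas reconstructs the complete message text.
full_text = "".join(deltas)
print(full_text)  # La plateforme is Mistral AI's developer platform.
```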
## Configure Model

To use a specific model configuration, initialize the model with the desired model name:

```python
llm = MistralAI(model="mistral-medium")
resp = llm.stream_complete("Paul Graham is ")
for r in resp:
    print(r.delta, end="")
```
## Function Calling

You can have the model call functions by defining tools. Here's an example:

```python
from llama_index.core.tools import FunctionTool
from llama_index.llms.mistralai import MistralAI


def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the result."""
    return a * b


def mystery(a: int, b: int) -> int:
    """Mystery function on two integers."""
    return a * b + a + b


mystery_tool = FunctionTool.from_defaults(fn=mystery)
multiply_tool = FunctionTool.from_defaults(fn=multiply)

llm = MistralAI(model="mistral-large-latest")
response = llm.predict_and_call(
    [mystery_tool, multiply_tool],
    user_msg="What happens if I run the mystery function on 5 and 7",
)
print(str(response))
```
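For the prompt above, the model is expected to select `mystery` and call it with `a=5, b=7`. You can verify the expected tool output locally, without making an API call:

```python
def mystery(a: int, b: int) -> int:
    """Mystery function on two integers."""
    return a * b + a + b


# The tool call the model should make for "5 and 7":
print(mystery(5, 7))  # 5*7 + 5 + 7 = 47
```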
## LLM Implementation example

https://docs.llamaindex.ai/en/stable/examples/llm/mistralai/