Explainable AI with Large Language Models

Llama-LIME

This Python library uses large language models to generate intuitive, human-readable explanations for the predictions of machine learning models.

Features

  • Support for scikit-learn models (PyTorch and TensorFlow support is planned; see TODO)
  • Integration with OpenAI's language models for explanation generation
  • Works with both classification and regression models (see the regression sketch below)
Usage

from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_iris

from ai_explainability import Explainer


iris = load_iris()
X, y = iris.data, iris.target

random_forest = RandomForestClassifier()
random_forest.fit(X, y)

# Create an explainer
explainer = Explainer(random_forest, language_model="openai/gpt-4")

# Generate natural-language explanations for the predictions
explanations = explainer.explain(X)
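
The same interface also covers regression. A minimal sketch, assuming Explainer accepts any fitted scikit-learn estimator:

from sklearn.ensemble import RandomForestRegressor
from sklearn.datasets import load_diabetes

from ai_explainability import Explainer


# Fit a regression model on the diabetes dataset
X, y = load_diabetes(return_X_y=True)

regressor = RandomForestRegressor()
regressor.fit(X, y)

# Assumed: the same Explainer API applies to regression estimators
explainer = Explainer(regressor, language_model="openai/gpt-4")

explanations = explainer.explain(X)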

For more detailed usage, see our Jupyter notebooks in the examples/ directory.

Contributing

We welcome contributions! See our contribution guide for more details.

License

This project is licensed under the terms of the MIT license.

TODO

  • Support for more model types: Llama-LIME currently supports scikit-learn models; we aim to add support for PyTorch and TensorFlow models.
  • Support for Hugging Face models: We also aim to support Hugging Face models, which would let Llama-LIME generate explanations for a wide range of state-of-the-art natural language processing models.
  • Improved explanation generation: The current explanation generation process is quite basic. We need to further refine this process to generate more detailed and useful explanations.
  • Model inspection capabilities: For more complex models, we might need to add functionality to inspect the internal workings of the model. This could involve using model interpretation techniques like LIME or SHAP; see the SHAP sketch after this list.
  • Data preprocessing functionality: We may need to add functionality to preprocess the data before feeding it to the model or the explanation generation system; see the pipeline sketch after this list.
  • Postprocessing of explanations: After generating the explanations, we may want to add postprocessing steps to make the explanations more readable or understandable. This could include summarization, highlighting, or conversion to other formats.
  • Testing: We need to add comprehensive testing to ensure the reliability and robustness of Llama-LIME.
  • Documentation: While we have made a start on documentation, we need to continue to expand and improve it.
  • Examples and tutorials: We should create more example notebooks and tutorials demonstrating how to use Llama-LIME with different types of data and models.
  • Community engagement: As an open-source project, we want to encourage community involvement. We need to continue improving our contribution guidelines and fostering an inclusive and welcoming community.
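
For the model-inspection item above, one plausible direction is to compute per-feature attributions with an existing interpretability library and pass them to the language model as context. A minimal sketch using the real shap package (the hand-off to Llama-LIME is an assumption, not current API):

import shap
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier().fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles
tree_explainer = shap.TreeExplainer(model)
shap_values = tree_explainer.shap_values(X)

# shap_values holds a per-feature attribution for every prediction;
# these could be serialized into the prompt used to generate the
# natural-language explanation (an assumption, not current behavior)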
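
Similarly, for the data-preprocessing item, scikit-learn pipelines already bundle preprocessing steps with a model, so a single fitted object could be handed to the explanation system. A sketch under that assumption:

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_iris

from ai_explainability import Explainer


X, y = load_iris(return_X_y=True)

# Bundle feature scaling and the classifier into one estimator
pipeline = make_pipeline(StandardScaler(), RandomForestClassifier())
pipeline.fit(X, y)

# Assumed: Explainer can wrap any object exposing predict()
explainer = Explainer(pipeline, language_model="openai/gpt-4")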
