
Explainable AI with Large Language Models

Project description

LLaMa LIME

This Python library uses large language models to generate intuitive, human-readable explanations for the predictions made by machine learning models.

Features

  • Support for scikit-learn and PyTorch models
  • Integration with OpenAI's language models for explanation generation
  • Works with both classification and regression models

Quick start

from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_iris

from ai_explainability import Explainer


# Load a dataset and train the model whose predictions we want to explain
iris = load_iris()
X, y = iris.data, iris.target

random_forest = RandomForestClassifier()
random_forest.fit(X, y)

# Create an explainer backed by an OpenAI language model
explainer = Explainer(random_forest, language_model="openai/gpt-4")

# Generate human-readable explanations for the predictions
explanations = explainer.explain(X)

For more detailed usage, see our Jupyter notebooks in the examples/ directory.
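To give a sense of what explanation generation might involve under the hood, here is a rough, hypothetical sketch of how a model's feature importances could be assembled into a prompt for a language model. `build_explanation_prompt` is purely illustrative and is not part of the llama-lime API.

```python
# Hypothetical sketch: turning a model's feature importances into an
# LLM prompt. Not part of the llama-lime API.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier


def build_explanation_prompt(model, feature_names, sample, prediction):
    """Rank features by importance and assemble a natural-language prompt."""
    ranked = sorted(
        zip(feature_names, model.feature_importances_, sample),
        key=lambda t: t[1],
        reverse=True,
    )
    lines = [f"- {name}: value={value:.2f}, importance={imp:.2f}"
             for name, imp, value in ranked]
    return (
        f"The model predicted class {prediction}. "
        "Explain this prediction given these features:\n" + "\n".join(lines)
    )


iris = load_iris()
model = RandomForestClassifier(random_state=0).fit(iris.data, iris.target)
sample = iris.data[0]
prediction = model.predict([sample])[0]
prompt = build_explanation_prompt(model, iris.feature_names, sample, prediction)
print(prompt)
```

A real implementation would then send this prompt to the configured language model and return its response as the explanation.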

Contributing

We welcome contributions! See our contribution guide for more details.

License

This project is licensed under the terms of the MIT license.

TODO

  • Support for more model types: Currently, Llama-LIME supports scikit-learn models. In the future, we aim to add support for other types of models, such as PyTorch and TensorFlow models.

  • Support for Hugging Face models: In addition to scikit-learn models, we aim to add support for Hugging Face models. This would allow Llama-LIME to generate explanations for a wide range of state-of-the-art natural language processing models.

  • Improved explanation generation: The current explanation generation process is quite basic. We need to further refine this process to generate more detailed and useful explanations.

  • Model inspection capabilities: For more complex models, we might need to add functionality to inspect the internal workings of the model. This could involve using model interpretation techniques like LIME or SHAP.

  • Data preprocessing functionality: We may need to add functionality to preprocess the data before feeding it to the model or the explanation generation system.

  • Postprocessing of explanations: After generating the explanations, we may want to add postprocessing steps to make the explanations more readable or understandable. This could include summarization, highlighting, or conversion to other formats.

  • Testing: We need to add comprehensive testing to ensure the reliability and robustness of Llama-LIME.

  • Documentation: While we have made a start on documentation, we need to continue to expand and improve it.

  • Examples and tutorials: We should create more example notebooks and tutorials demonstrating how to use Llama-LIME with different types of data and models.

  • Community engagement: As an open-source project, we want to encourage community involvement. We need to continue improving our contribution guidelines and fostering an inclusive and welcoming community.

  • Feature naming: investigate whether feature names can be used to guide the generated descriptions.
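As a rough illustration of the LIME-style model inspection mentioned in the TODO list: perturb an input, query the black-box model, and fit a simple linear surrogate whose coefficients act as local feature attributions. This is only a sketch of the general technique; `local_attributions` and its parameters are hypothetical and not part of llama-lime.

```python
# Illustrative sketch of LIME-style local inspection: fit a linear
# surrogate around one instance of a black-box model.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge


def local_attributions(model, x, n_samples=500, scale=0.3, seed=0):
    """Fit a local linear surrogate around x; return per-feature weights."""
    rng = np.random.default_rng(seed)
    # Sample perturbations around the instance of interest
    X_local = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    # Probability of the model's predicted class for each perturbation
    target_class = model.predict(x.reshape(1, -1))[0]
    y_local = model.predict_proba(X_local)[:, target_class]
    # Surrogate coefficients approximate local feature influence
    surrogate = Ridge(alpha=1.0).fit(X_local, y_local)
    return surrogate.coef_


iris = load_iris()
model = RandomForestClassifier(random_state=0).fit(iris.data, iris.target)
weights = local_attributions(model, iris.data[0])
for name, w in zip(iris.feature_names, weights):
    print(f"{name}: {w:+.3f}")
```

Attributions like these could then be passed to the language model as structured context for generating a narrative explanation.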

Download files

Download the file for your platform.

Source Distribution

llama_lime-0.1.16.tar.gz (9.8 kB)

Uploaded Source

Built Distribution


llama_lime-0.1.16-py3-none-any.whl (10.1 kB)

Uploaded Python 3

File details

Details for the file llama_lime-0.1.16.tar.gz.

File metadata

  • Download URL: llama_lime-0.1.16.tar.gz
  • Upload date:
  • Size: 9.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.5.1 CPython/3.10.9 Darwin/22.6.0

File hashes

Hashes for llama_lime-0.1.16.tar.gz

  • SHA256: d9110072c1ac0973349ab44ea05b76798c73d89c36948f796b46b49174495f20
  • MD5: cdc7081fa374da6bbd1bf5526f356c5e
  • BLAKE2b-256: c9e1e9969365a58deaa9a18f0a35fd5f0ed9b438159266f21c062fad82f7e0b9


File details

Details for the file llama_lime-0.1.16-py3-none-any.whl.

File metadata

  • Download URL: llama_lime-0.1.16-py3-none-any.whl
  • Upload date:
  • Size: 10.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.5.1 CPython/3.10.9 Darwin/22.6.0

File hashes

Hashes for llama_lime-0.1.16-py3-none-any.whl

  • SHA256: c44f3e810fce73345e41a2fba42e0be8e0c7798d946e207bcba0a3321def6c5d
  • MD5: 1b4e674dd64cf2bf68fb9bb702939640
  • BLAKE2b-256: 1a102dace4cdac8ce3502cecb9bf399835886b316f9784f500be93e5c018491f

