# AI Card

Compute and organize model cards locally or online.
This SDK contains a collection of methods to create, manage, and edit AI model cards. Cards can be stored either locally or in an online service (which can also be self-hosted). The SDK also provides methods that populate card fields automatically by analyzing datasets, either by computing well-known measures or by calling AI assistants.
*Alpha version - current APIs and functionality are unstable.*
## ⚡ Quickstart
Install aicard in a virtual environment:

```bash
python -m venv .venv
source .venv/bin/activate
pip install aicard
```

If you are a developer working on this repository, clone it and install it locally
with `pip install -e package` instead. Create your first model card like below:
```python
# demo.py
import aicard as aic

card = aic.ModelCard()
card.title = "Model Card"
card.model.name = "Llama"
card.model.overview = "This is a model overview. Freely add <b>html</b> or *markdown*."
card.model.version = "3.2"
card.considerations.use_case = "text generation"
print(card)
```
```
> python demo.py

┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Model Card                                                                    ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
completion      🧩🧩⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️
model
  name          Llama
  overview      This is a model overview. Freely add html or markdown.
  version       3.2
considerations
  use case      text generation
```
## 🧠 Assistants
Perform manual numerical assessments across a wide variety of available tasks, which implement popular evaluation methodologies and measures. If you need customization, you can create your own tasks too.
```python
aic.evaluation.evaluate(
    data={
        "boxes": [[[300, 100, 315, 150], [300, 100, 315, 150]]],
        "labels": [[0, 1]],
        "target": ["labels"],
    },
    pipeline=lambda x: [[x["boxes"][0], x["labels"][0], [0.1, 0.9]]],  # your model
    task=aic.evaluation.tasks.vision.object_detection,
)
# aic.evaluation.evaluate(...) returns a dictionary of metric values
```
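To give a feel for the kind of measure an object-detection task computes over `[x1, y1, x2, y2]` boxes like those in the snippet above, here is a plain-Python intersection-over-union (IoU) sketch. This is an illustration of the general technique, not aicard's actual implementation:

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    # corners of the intersection rectangle
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    # clamp to zero when the boxes do not overlap
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# the two identical boxes from the snippet overlap perfectly
print(iou([300, 100, 315, 150], [300, 100, 315, 150]))  # 1.0
```

A detection is typically counted as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5.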
You can also collaborate with LLM assistants! These fill in qualitative aspects of the model card from a software repository. In the simplest case, the assistants are owned by you. But if you want persistent storage of your cards, you can collaborate with the public or a self-hosted service too (see below). The assistant creates a copy of the model card with its changes applied.
```python
card = card.assistant(
    repository="myproject/",  # your model's git repository or working directory
    model=aic.assistants.olama,
    dotenv=".env",  # assistant configuration file
)
```
The AI assistant can also create a simplified version of your model card that is friendlier for laypeople to read and parse.
```python
card = card.gist(
    model=aic.assistants.olama,
    dotenv=".env",  # assistant configuration file
)
```
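The contents of the `dotenv` configuration file are not spelled out here; as a rough guess at its shape, it might hold the assistant endpoint and credentials along the following lines. Every key name below is hypothetical:

```ini
# hypothetical .env for assistant configuration -- key names are illustrative
ASSISTANT_URL=http://localhost:11434
ASSISTANT_MODEL=llama3.2
USERNAME=admin
PASSWORD=admin
```

Check the project's own documentation for the actual keys it expects.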
## 📡 Connecting to a server
You can link to a public or self-hosted server for persistent card storage.
Here we link to the public server, which can be found at URL (to be decided).
To work with a server, log in first and create a card.
```python
# the credentials below are the default for self-hosted servers
conn = aic.connect("URL").login(username="admin", password="admin")
card = conn.create()  # no argument for a brand new card
```
Alternatively, connect with a dotenv file holding those fields:
```python
# the credentials below are the default for self-hosted servers
conn = aic.connect("URL").login(dotenv=".env")
card = conn.create()
```
You can also search for cards. Cards that you do not own come with no connection attached and therefore cannot be used as contexts. To simplify usage, the default argument value `owned_only=True` retrieves only the cards you own.
```python
for card in conn.search("Model Card", owned_only=True, top=10):
    print(card.title)  # treat it as a normal card
```
Submit local card modifications to the server by calling `card.commit()`. The card keeps track of the connection; if you fail to commit at some point in your program, you will eventually get an assertion error. Call `card.detach()` to safely detach a card from a connection without committing pending changes (this disables further commits). The best practice is to use the card as a context when making changes that require a commit, like below:
```python
conn = aic.connect("http://127.0.0.1:5000", username="admin", password="admin")
with conn.create(card) as card:
    card.title = "Updated model card name"
```
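The commit-on-exit behavior of this context pattern can be sketched with a minimal stand-in class. `DemoCard` below is hypothetical and only illustrates the semantics of committing when the block succeeds; it is not aicard's actual implementation:

```python
class DemoCard:
    """Hypothetical stand-in illustrating commit-on-exit semantics."""

    def __init__(self):
        self.title = ""
        self.committed = False

    def commit(self):
        self.committed = True  # a real card would push pending changes here

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        if exc_type is None:
            self.commit()  # commit only when the block completed without error
        return False  # let any exception propagate

card = DemoCard()
with card:
    card.title = "Updated model card name"
print(card.committed)  # True
```

An exception raised inside the `with` block leaves the card uncommitted, so a failed edit never reaches the server.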
Connections can also substitute for `aic` as the environment of assistants. For safety, you will get an error if you try to call a connection's gist or metric computation without committing the card first, as the server operates on its own copy.
```python
card = card.gist(
    model=aic.assistants.olama,
    dotenv=conn,  # use the connection as the assistant configuration
)
```
Server load may delay the above snippet, as it blocks until notified by the server. Finally, the server may not support the assistant, for example if it is a custom one.
## 🛠️ Self-hosting a server
If you have aicard installed, you can immediately self-host a server. To do so, create a dictionary of assistants and start a Flask service. The first run sets up everything, including a database. If you do not plan to expose the server externally, you can log in with the default administrator credentials, as above. If you want console logging instead of persistent logging, skip the `log_file` argument. Below is an example service whose assistant is primarily used for testing:
```python
from aicard.service import serve, TestAssistant

app = serve("/docs", {"tassist": TestAssistant()}, log_file="log.txt")
app.run()
```
## 📜 License
TBD
## File details

Details for the file `aicard-0.3.1-py3-none-any.whl`.

File metadata:
- Download URL: aicard-0.3.1-py3-none-any.whl
- Upload date:
- Size: 59.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.3
File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `f83867947e891a22fad997007fa605d915faf6bbb773a1d5863f366d48b44e9f` |
| MD5 | `8e33313a386a982120456da32c9ed4ed` |
| BLAKE2b-256 | `a2ad059adb6b3d272d0fa640a8537896cdf75ec00224ca65e1a287d2d9733ea4` |