Clarifai Python Data Utils
This is a collection of utilities for handling various types of multimedia data. Pair them with the Clarifai Python SDK to tackle both visual and textual use cases with AI, from loading and converting annotated image datasets to ingesting documents into the Clarifai platform. 🌐🚀
Website | Schedule Demo | Signup for a Free Account | API Docs | Clarifai Community | Python SDK Docs | Examples | Colab Notebooks | Discord
Table Of Contents
- Installation
- Getting started
- Features
- Usage
- More Examples
Installation
Install from PyPI:
pip install clarifai-datautils
Install from Source:
git clone https://github.com/Clarifai/clarifai-python-datautils
cd clarifai-python-datautils
python3 -m venv env
source env/bin/activate
pip3 install -r requirements.txt
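To verify the installation, a minimal check (using the same import path as the examples below):

# Sanity check: this import should succeed without errors after installation
from clarifai_datautils import ImageAnnotations
print('clarifai-datautils imported successfully')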
Getting started
A quick intro to the Image Annotation Conversion feature:
from clarifai_datautils import ImageAnnotations
annotated_dataset = ImageAnnotations.import_from(path='folder_path', format='annotation_format')
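The returned object exposes the same helpers demonstrated in the Usage section below; a quick example, with 'another_format' as a placeholder for any supported annotation format name:

annotated_dataset.get_info()  # summary of the loaded dataset
annotated_dataset.export_to('another_format')  # convert to any other supported annotation format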
Features
Image Utils
Annotation Loader
- Load various annotated image datasets and export them to the Clarifai platform
- Convert from one annotation format to any other supported annotation format
Data Ingestion Pipeline
- Easy-to-use pipelines to load data from files and ingest it into the Clarifai platform
- Load text files (PDF, DOC, etc.), then transform, chunk, and upload them to the Clarifai platform
Usage
Image Annotation Loader
from clarifai_datautils import ImageAnnotations

# Import annotations from a folder
coco_dataset = ImageAnnotations.import_from(path='folder_path', format='coco_detection')

# Use the Clarifai SDK to upload to the Clarifai platform
# export CLARIFAI_PAT={your personal access token}  # set your PAT as an env variable
from clarifai.client.dataset import Dataset
dataset = Dataset(user_id="user_id", app_id="app_id", dataset_id="dataset_id")
dataset.upload_dataset(dataloader=coco_dataset.dataloader)

# Info about the loaded dataset
coco_dataset.get_info()

# Export to another supported annotation format
coco_dataset.export_to('voc_detection')
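Putting these calls together, a minimal sketch of a reusable conversion helper; the helper name and default formats are illustrative, and only the methods shown above are used:

from clarifai_datautils import ImageAnnotations

def convert_annotations(folder_path, source_format='coco_detection', target_format='voc_detection'):
    # Load the annotated dataset from disk in the source format
    dataset = ImageAnnotations.import_from(path=folder_path, format=source_format)
    # Print a summary of the loaded dataset
    dataset.get_info()
    # Convert to the target annotation format
    dataset.export_to(target_format)
    return dataset

convert_annotations('folder_path')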
Data Ingestion Pipelines
Setup
To use the Data Ingestion Pipeline, install the additional dependencies:
pip install -r requirements-dev.txt
from clarifai_datautils.text import Pipeline, PDFPartition
from clarifai_datautils.text.pipeline.cleaners import Clean_extra_whitespace

# Define the pipeline: partition PDFs by title into chunks of at most 1024 characters,
# then strip extra whitespace from each chunk
pipeline = Pipeline(
    name='pipeline-1',
    transformations=[
        PDFPartition(chunking_strategy="by_title", max_characters=1024),
        Clean_extra_whitespace()
    ]
)

# Use the Clarifai SDK to upload the processed chunks
from clarifai.client import Dataset
dataset = Dataset(dataset_url)  # dataset_url: URL of your Clarifai dataset

# Run the pipeline on the file(s) at file_path and upload the results
dataset.upload_dataset(pipeline.run(files=file_path, loader=True))
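To ingest a whole folder of documents, the same pipeline can be run once per file; a sketch that reuses the `pipeline` object defined above, with the glob pattern and `dataset_url` as placeholders:

import glob

from clarifai.client import Dataset

dataset = Dataset(dataset_url)  # dataset_url: URL of your Clarifai dataset
for file_path in glob.glob('folder_path/*.pdf'):
    # Run the pipeline on each PDF and upload the resulting chunks
    dataset.upload_dataset(pipeline.run(files=file_path, loader=True))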
More Examples
See many more code examples in this repo.
Hashes for clarifai_datautils-0.0.4-py3-none-any.whl
| Algorithm | Hash digest |
|---|---|
| SHA256 | 74fb565444f21d8f4263cf8bed3053b2c9b16a7a4dc559e34428d7d97a048640 |
| MD5 | f2fbb3a57cec787600e9c321ca28c3b9 |
| BLAKE2b-256 | 9134570d98317cdb8143448489b6df5b17cf6ac234db360a6a1b3b66dbbc65d7 |