# Milvus Dataset

Milvus Dataset is a versatile Python library for efficient management and processing of large-scale datasets. While optimized for seamless integration with the Milvus vector database, it also works as a standalone dataset management tool. The library provides a simple yet powerful interface for creating, writing, reading, and managing datasets, and excels at handling large-scale vector data.
## Key Features

- **Flexible Storage Support**
  - Local storage support
  - Object storage support (S3/MinIO)
  - Easy migration between different storage types
- **Rich Data Type Support**
  - Basic data types (INT64, VARCHAR, etc.)
  - Vector data types (FLOAT_VECTOR)
  - JSON fields
  - Sparse vectors
  - Binary vectors
- **Dataset Management**
  - Training and test set split support
  - Dataset metadata management
  - Dataset statistics and analytics
  - Schema definition and validation
- **Integration Capabilities**
  - Import to Milvus database
  - Upload to Hugging Face Hub
  - Seamless pandas DataFrame integration
  - Built-in nearest neighbor computation
  - Built-in mock data generation
## Installation

```bash
pip install milvus-dataset
```
## Quick Start Guide

### 1. Basic Configuration

```python
from milvus_dataset import ConfigManager, StorageType

# Initialize local storage
ConfigManager().init_storage(
    root_path="./data/my-dataset",
    storage_type=StorageType.LOCAL,
)

# Initialize S3 storage
ConfigManager().init_storage(
    root_path="s3://bucket/path",
    storage_type=StorageType.S3,
    options={
        "aws_access_key_id": "your_key",
        "aws_secret_access_key": "your_secret",
        "endpoint_url": "your_endpoint",  # Optional, for MinIO
    },
)
```
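In production you will typically read credentials from the environment rather than hardcoding them. A minimal sketch using the same `init_storage` call and option keys as above; the environment variable names are illustrative:

```python
import os

from milvus_dataset import ConfigManager, StorageType

# Same init_storage call as above, with credentials pulled from the environment
ConfigManager().init_storage(
    root_path="s3://bucket/path",
    storage_type=StorageType.S3,
    options={
        "aws_access_key_id": os.environ["AWS_ACCESS_KEY_ID"],
        "aws_secret_access_key": os.environ["AWS_SECRET_ACCESS_KEY"],
        # Optional, for MinIO; the variable name is illustrative
        "endpoint_url": os.environ.get("MINIO_ENDPOINT", "your_endpoint"),
    },
)
```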
### 2. Creating a Dataset

```python
from pymilvus import CollectionSchema, DataType, FieldSchema
from milvus_dataset import load_dataset

# Define the schema
schema = CollectionSchema(
    fields=[
        FieldSchema("id", DataType.INT64, is_primary=True),
        FieldSchema("text", DataType.VARCHAR, max_length=65535),
        FieldSchema("embedding", DataType.FLOAT_VECTOR, dim=1024),
    ],
    description="Text vector dataset",
)

# Load the dataset
dataset = load_dataset("my-dataset", schema=schema)
```
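The richer data types listed under Key Features (JSON fields, sparse vectors, binary vectors) are declared with the same pymilvus schema primitives. A minimal sketch assuming the standard pymilvus `DataType` members; the field names are illustrative:

```python
from pymilvus import CollectionSchema, DataType, FieldSchema

# Illustrative schema mixing a JSON field with sparse and binary vectors
rich_schema = CollectionSchema(
    fields=[
        FieldSchema("id", DataType.INT64, is_primary=True),
        FieldSchema("metadata", DataType.JSON),                   # JSON field
        FieldSchema("sparse", DataType.SPARSE_FLOAT_VECTOR),      # sparse vector
        FieldSchema("binary", DataType.BINARY_VECTOR, dim=1024),  # binary vector
    ],
    description="Dataset with JSON, sparse, and binary fields",
)
```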
### 3. Writing Data

```python
import pandas as pd
import numpy as np

# Prepare data
df = pd.DataFrame({
    "id": range(1000),
    "text": [f"text_{i}" for i in range(1000)],
    "embedding": [np.random.rand(1024) for _ in range(1000)],
})

# Write to the training set
with dataset["train"].get_writer(mode="append") as writer:
    writer.write(df)
```
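Key Features also lists training and test set splits. Assuming splits are addressed by name the same way as `"train"` above, a held-out split can be written with the same writer API (the `"test"` split name is an assumption, not confirmed API):

```python
# Hypothetical: write a held-out split, assuming splits are addressed by name
test_df = df.sample(n=100, random_state=42)
with dataset["test"].get_writer(mode="append") as writer:
    writer.write(test_df)
```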
### 4. Dataset Operations

```python
from milvus_dataset import StorageConfig, StorageType

# View dataset information
print(dataset.summary())

# Compute neighbors
dataset.compute_neighbors(
    vector_field_name="embedding",
    pk_field_name="id",
    top_k=100,
)

# Import to Milvus
dataset.to_milvus(
    milvus_config={
        "host": "localhost",
        "port": 19530,
    },
    milvus_storage=StorageConfig(
        root_path="s3://bucket/path",
        storage_type=StorageType.S3,
        options={
            "aws_access_key_id": "your_key",
            "aws_secret_access_key": "your_secret",
            "endpoint_url": "your_endpoint",  # Optional, for MinIO
        },
    ),
)

# Upload to Hugging Face
dataset.to_hf(repo_name="username/dataset-name")
```
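For intuition, `compute_neighbors` produces the kind of ground truth a brute-force scan yields: for each query vector, the indices of its `top_k` nearest dataset vectors. A minimal numpy sketch assuming an L2 metric (the library's actual metric and output format may differ):

```python
import numpy as np

def brute_force_top_k(queries: np.ndarray, corpus: np.ndarray, k: int = 100) -> np.ndarray:
    """Return indices of the k nearest corpus rows for each query row (L2 distance)."""
    # Pairwise squared L2 distances, shape (n_queries, n_corpus)
    dists = ((queries[:, None, :] - corpus[None, :, :]) ** 2).sum(axis=-1)
    # For each query, corpus indices sorted by distance, truncated to k
    return np.argsort(dists, axis=1)[:, :k]

# Example: 10 queries against 1,000 corpus vectors of dim 1024
neighbors = brute_force_top_k(np.random.rand(10, 1024), np.random.rand(1000, 1024), k=5)
```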
## Advanced Usage

### Performance Optimization

**File Size Configuration**

```python
with dataset["train"].get_writer(
    mode="append",
    target_file_size_mb=512,  # Adjust file size
    num_buffers=15,           # Adjust buffer number
    queue_size=30,            # Adjust queue size
) as writer:
    writer.write(df)
```

**Batch Processing**

```python
# Read in batches
for batch in dataset["train"].read(mode="batch", batch_size=1000):
    process_batch(batch)
```
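`process_batch` above is a user-supplied callback. A trivial placeholder, assuming each batch arrives as a pandas DataFrame (an assumption about the read API's return type):

```python
def process_batch(batch):
    # Placeholder handler; assumes each batch is a pandas DataFrame
    print(f"processed {len(batch)} rows")
```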
### Storage Migration

```python
# Move data from local storage to S3
dataset.to_storage(StorageConfig(
    storage_type=StorageType.S3,
    root_path="s3://bucket/path",
    options={...},  # Same option keys as in Basic Configuration
))
```
## Common Issues and Solutions

- **Storage Type Selection**
  - Use local storage for development and testing
  - Use object storage for production environments
- **Handling Large-Scale Data**
  - Write in batches (see the sketch after this list)
  - Set appropriate buffer and queue sizes
  - Consider parallel processing
- **Ensuring Data Quality**
  - Define a comprehensive schema
  - Enable schema validation
  - Check dataset statistics regularly
- **Performance Optimization Tips**
  - Set a reasonable target file size (`target_file_size_mb`)
  - Tune the buffer parameters (`num_buffers`, `queue_size`)
  - Process data in batches instead of row by row
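A minimal sketch of batch writing, using only the writer API shown above; the chunk sizes and the `make_batch` helper are illustrative:

```python
import numpy as np
import pandas as pd

def make_batch(start: int, size: int, dim: int = 1024) -> pd.DataFrame:
    # Hypothetical helper that builds one chunk of rows
    return pd.DataFrame({
        "id": range(start, start + size),
        "text": [f"text_{i}" for i in range(start, start + size)],
        "embedding": [np.random.rand(dim) for _ in range(size)],
    })

# Stream 1M rows in 10k-row chunks instead of materializing one huge DataFrame
with dataset["train"].get_writer(mode="append", num_buffers=15, queue_size=30) as writer:
    for start in range(0, 1_000_000, 10_000):
        writer.write(make_batch(start, 10_000))
```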
## Contributing

We welcome contributions! Please feel free to submit a Pull Request.
## File details

Details for the file milvus_dataset-1.0.0.post36.tar.gz.

### File metadata

- Download URL: milvus_dataset-1.0.0.post36.tar.gz
- Upload date:
- Size: 44.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: pdm/2.22.3 CPython/3.13.4 Darwin/22.6.0

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `6e5f69215cf3a615825719bad266af2aa956849f55ad6bef3951b4e6da765971` |
| MD5 | `7116b737f473eb8138886e07a7de1fb4` |
| BLAKE2b-256 | `3edfad11a35a9a1fb017dc14ea54b453b81aa1e293db33f9d1ad4ec6563deed1` |
## File details

Details for the file milvus_dataset-1.0.0.post36-py3-none-any.whl.

### File metadata

- Download URL: milvus_dataset-1.0.0.post36-py3-none-any.whl
- Upload date:
- Size: 43.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: pdm/2.22.3 CPython/3.13.4 Darwin/22.6.0

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `a8d4a1dfa1cf01582f7fad2877dc2defe7ae1a8076d765bcf667f4e9fa8a1e72` |
| MD5 | `4de5edda975214358aab0ff6db89e71b` |
| BLAKE2b-256 | `92d5099876b89e8d3c39c8b207cba4eb07bec83a7948380d192bf092e435dd3e` |