Milvus Python SDK


Python SDK for Milvus. To contribute code to this project, please read our contribution guidelines first.

For detailed SDK documentation, refer to API Documentation.

Get started

Prerequisites

pymilvus only supports Python 3.6 or higher.

Install pymilvus

You can install pymilvus via pip (or pip3 for Python 3):

$ pip3 install pymilvus

The following table shows Milvus versions and the recommended pymilvus versions:

| Milvus version  | Recommended pymilvus version |
|-----------------|------------------------------|
| 0.3.0           | 0.1.13                       |
| 0.3.1           | 0.1.25                       |
| 0.4.0           | 0.2.2                        |
| 0.5.0           | 0.2.3                        |
| 0.5.1           | 0.2.3                        |
| 0.5.2           | 0.2.3                        |
| 0.5.3           | 0.2.5                        |
| 0.6.0           | 0.2.6, 0.2.7                 |
| 0.7.0           | 0.2.8                        |
| 0.7.1           | 0.2.9                        |
| 0.8.0           | 0.2.10                       |
| 0.9.0           | 0.2.11                       |
| 0.9.1           | 0.2.12                       |
| 0.10.0          | 0.2.13                       |
| 0.10.1 - 0.10.4 | 0.2.14                       |
| 0.10.5          | 0.2.15, 0.4.0                |
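If you script your deployments, the compatibility table above can be encoded as a small lookup helper. This is a hypothetical convenience, not part of pymilvus itself; the mapping simply mirrors the table:

```python
# Compatibility table from above, encoded as a dict.
# Keys are Milvus releases; values are recommended pymilvus versions.
RECOMMENDED = {
    "0.3.0": ["0.1.13"],
    "0.3.1": ["0.1.25"],
    "0.4.0": ["0.2.2"],
    "0.5.0": ["0.2.3"],
    "0.5.1": ["0.2.3"],
    "0.5.2": ["0.2.3"],
    "0.5.3": ["0.2.5"],
    "0.6.0": ["0.2.6", "0.2.7"],
    "0.7.0": ["0.2.8"],
    "0.7.1": ["0.2.9"],
    "0.8.0": ["0.2.10"],
    "0.9.0": ["0.2.11"],
    "0.9.1": ["0.2.12"],
    "0.10.0": ["0.2.13"],
    "0.10.1": ["0.2.14"],
    "0.10.2": ["0.2.14"],
    "0.10.3": ["0.2.14"],
    "0.10.4": ["0.2.14"],
    "0.10.5": ["0.2.15", "0.4.0"],
}

def recommended_pymilvus(milvus_version):
    """Return the recommended pymilvus versions, or None if unknown."""
    return RECOMMENDED.get(milvus_version)

print(recommended_pymilvus("0.10.5"))  # → ['0.2.15', '0.4.0']
```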

You can install a specific version of pymilvus by:

$ pip install pymilvus==0.4.0

You can upgrade pymilvus to the latest version by:

$ pip install --upgrade pymilvus

Examples

Refer to the examples in the repository for more example programs.

Basic operations

Connect to the Milvus server

  1. Import pymilvus.

    # Import pymilvus
    >>> from milvus import Milvus, IndexType, MetricType, Status
    
  2. Create a client connected to the Milvus server using one of the following methods:

    # Connect to Milvus server
    >>> client = Milvus(host='localhost', port='19530')
    

    Note: In the above code, default values are used for host and port parameters. Feel free to change them to the IP address and port you set for Milvus server.

    >>> client = Milvus(uri='tcp://localhost:19530')
    
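The two forms address the same endpoint. As a sketch (the helper name is hypothetical; pymilvus accepts either form directly), this is how host/port map onto the tcp:// URI:

```python
def milvus_uri(host="localhost", port="19530"):
    """Build the tcp:// URI equivalent of separate host/port arguments.
    Hypothetical helper for illustration only."""
    return "tcp://{}:{}".format(host, port)

print(milvus_uri())                      # tcp://localhost:19530
print(milvus_uri("10.0.0.5", "19531"))   # tcp://10.0.0.5:19531
```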

Create/Drop collections

Create a collection

  1. Prepare collection parameters.

    # Prepare collection parameters
    >>> param = {'collection_name':'test01', 'dimension':128, 'index_file_size':1024, 'metric_type':MetricType.L2}
    
  2. Create collection test01 with a vector dimension of 128, an index file size of 1024 MB (the data file size at which Milvus automatically builds an index), and Euclidean distance (L2) as the metric type.

    # Create a collection
    >>> status = client.create_collection(param)
    >>> status
    Status(code=0, message='Create collection successfully!')
    
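Mistakes in the parameter dict only surface once the request reaches the server, so it can help to sanity-check it locally first. A minimal sketch, assuming the documented Milvus 0.x limit of 16,384 dimensions (the helper and the limit check are illustrative, not part of pymilvus):

```python
# Defensive check of collection parameters before calling
# client.create_collection(param). The key set mirrors the example above.
REQUIRED_KEYS = {"collection_name", "dimension", "index_file_size", "metric_type"}

def validate_collection_param(param):
    """Raise ValueError if the param dict is obviously malformed."""
    missing = REQUIRED_KEYS - set(param)
    if missing:
        raise ValueError("missing keys: {}".format(sorted(missing)))
    dim = param["dimension"]
    if not isinstance(dim, int) or not 1 <= dim <= 16384:
        raise ValueError("dimension must be an int in [1, 16384]")
    return True

param = {'collection_name': 'test01', 'dimension': 128,
         'index_file_size': 1024, 'metric_type': 'L2'}
print(validate_collection_param(param))  # True
```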

Drop a collection

# Drop collection
>>> status = client.drop_collection(collection_name='test01')
>>> status
Status(code=0, message='Delete collection successfully!')

Create/Drop partitions in a collection

Create a partition

You can split collections into partitions by partition tags for improved search performance. Each partition is also a collection.

# Create partition
>>> status = client.create_partition(collection_name='test01', partition_tag='tag01')
>>> status
Status(code=0, message='OK')

Use list_partitions() to verify whether the partition is created.

# Show partitions
>>> status, partitions = client.list_partitions(collection_name='test01')
>>> partitions
[(collection_name='test01', tag='_default'), (collection_name='test01', tag='tag01')]
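Partition tags are plain strings, so a common scheme (hypothetical here, not mandated by Milvus) is to derive one tag per time window, e.g. one partition per month, and pass the result to create_partition():

```python
from datetime import date

def month_tag(d):
    """Derive a per-month partition tag from a date, e.g. for
    client.create_partition(collection_name='test01', partition_tag=month_tag(d))."""
    return "tag_{:04d}_{:02d}".format(d.year, d.month)

print(month_tag(date(2020, 6, 15)))  # tag_2020_06
```

Searches can then be restricted to the relevant months via partition_tags.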

Drop a partition

>>> status = client.drop_partition(collection_name='test01', partition_tag='tag01')
>>> status
Status(code=0, message='OK')

Create/Drop indexes in a collection

Create an index

Note: In production, we recommend creating an index before inserting vectors into the collection, so that the index is built automatically as vectors are imported. However, after the insertion completes you need to create the same index again, because data files that have not reached index_file_size are not indexed automatically.

  1. Prepare index parameters. The following command uses IVF_FLAT index type as an example.

    # Prepare index param
    >>> ivf_param = {'nlist': 4096}
    
  2. Create an index for the collection.

    # Create index
    >>> status = client.create_index('test01', IndexType.IVF_FLAT, ivf_param)
    >>> status
    Status(code=0, message='Build index successfully!')
    
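The nlist value controls how many buckets IVF_FLAT partitions the vectors into. A common rule of thumb from the IVF/Faiss community (an assumption, not an official Milvus recommendation) is nlist ≈ 4·√n for n vectors:

```python
import math

def suggest_nlist(num_vectors):
    """Heuristic starting point for nlist: roughly 4 * sqrt(n),
    clamped to at least 1. Tune against recall/latency measurements."""
    return max(1, int(4 * math.sqrt(num_vectors)))

print(suggest_nlist(1_000_000))  # 4000
```

A larger nlist speeds up search (fewer vectors per probed bucket) but needs a larger nprobe at query time to keep recall up.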

Drop an index

>>> status = client.drop_index('test01')
>>> status
Status(code=0, message='OK')

Insert/Delete vectors in collections/partitions

Insert vectors in a collection

  1. Generate 20 random 128-dimensional vectors.

    >>> import random
    >>> dim = 128
    # Generate 20 random 128-dimensional vectors
    >>> vectors = [[random.random() for _ in range(dim)] for _ in range(20)]
    
  2. Insert the list of vectors. If you do not specify vector IDs, Milvus generates them automatically.

    # Insert vectors
    >>> status, inserted_vector_ids = client.insert(collection_name='test01', records=vectors)
    >>> inserted_vector_ids 
    [1592028661511657000, 1592028661511657001, 1592028661511657002, 1592028661511657003, 1592028661511657004, 1592028661511657005, 1592028661511657006, 1592028661511657007, 1592028661511657008, 1592028661511657009, 1592028661511657010, 1592028661511657011, 1592028661511657012, 1592028661511657013, 1592028661511657014, 1592028661511657015, 1592028661511657016, 1592028661511657017, 1592028661511657018, 1592028661511657019]
    

    Alternatively, you can provide user-defined vector IDs:

    >>> vector_ids = list(range(20))
    >>> status, inserted_vector_ids = client.insert(collection_name='test01', records=vectors, ids=vector_ids)
    
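User-defined IDs must be 64-bit integers. If your records are keyed by strings, one hypothetical way to derive stable integer IDs is to hash the key and truncate to 63 bits (collisions are theoretically possible, but vanishingly unlikely at moderate scale):

```python
import hashlib

def stable_id(key):
    """Map a string key to a deterministic, non-negative 63-bit integer,
    suitable as a user-defined vector ID. Illustrative helper only."""
    digest = hashlib.blake2b(key.encode("utf-8"), digest_size=8).digest()
    return int.from_bytes(digest, "big") >> 1  # keep within signed int64 range

ids = [stable_id("doc-{}".format(i)) for i in range(20)]
assert len(set(ids)) == 20              # no collisions in this sample
assert all(0 <= i < 2**63 for i in ids)
```

Because the mapping is deterministic, the same external key always resolves to the same vector ID, which makes later lookups and deletions by key straightforward.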

Insert vectors in a partition

>>> status, inserted_vector_ids = client.insert('test01', vectors, partition_tag="tag01")

To verify the inserted vectors, use get_entity_by_id(). For example, fetch the first ten vectors that were just inserted:

>>> status, vector = client.get_entity_by_id(collection_name='test01', ids=inserted_vector_ids[:10])

Delete vectors by ID

You can delete these vectors by:

>>> status = client.delete_entity_by_id('test01', inserted_vector_ids[:10])
>>> status
Status(code=0, message='OK')

Flush data in one or multiple collections to disk

When performing operations related to data changes, you can flush the data from memory to disk to avoid possible data loss. Milvus also supports automatic flushing, which runs at a fixed interval to flush the data in all collections to disk. You can use the Milvus server configuration file to set the interval.

>>> status = client.flush(['test01'])
>>> status
Status(code=0, message='OK')

Compact all segments in a collection

A segment is a data file that Milvus automatically creates by merging inserted vector data. A collection can contain multiple segments. If some vectors are deleted from a segment, the space taken by the deleted vectors cannot be released automatically. You can compact segments in a collection to release space.

>>> status = client.compact(collection_name='test01')
>>> status
Status(code=0, message='OK')

Search vectors in collections/partitions

Search vectors in a collection

  1. Prepare search parameters.

    >>> search_param = {'nprobe': 16}
    
  2. Search vectors.

    # Generate 5 random 128-dimensional query vectors
    >>> q_records = [[random.random() for _ in range(dim)] for _ in range(5)]
    # Search vectors
    >>> status, results = client.search(collection_name='test01', query_records=q_records, top_k=2, params=search_param)
>>> results
[
[(id:1592028661511657012, distance:19.450458526611328), (id:1592028661511657017, distance:20.13418197631836)],
[(id:1592028661511657012, distance:19.12230682373047), (id:1592028661511657018, distance:20.221458435058594)],
[(id:1592028661511657014, distance:20.423980712890625), (id:1592028661511657016, distance:20.984281539916992)],
[(id:1592028661511657018, distance:18.37057876586914), (id:1592028661511657019, distance:19.366962432861328)],
[(id:1592028661511657013, distance:19.522361755371094), (id:1592028661511657010, distance:20.304216384887695)]
]
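For small datasets you can sanity-check IVF search results against an exact brute-force scan. The sketch below is plain Python (no Milvus required) and ranks by squared L2 distance, which yields the same ordering as true L2; note that some Milvus 0.x releases report squared L2 distances, so compare orderings rather than absolute values:

```python
def top_k_l2(query, records, k):
    """Exact top-k nearest neighbours by squared L2 distance.
    Illustrative reference implementation, not part of pymilvus."""
    scored = [
        (idx, sum((a - b) ** 2 for a, b in zip(query, rec)))
        for idx, rec in enumerate(records)
    ]
    scored.sort(key=lambda pair: pair[1])  # smallest distance first
    return scored[:k]

records = [[0.0, 0.0], [1.0, 1.0], [3.0, 3.0]]
print(top_k_l2([0.9, 0.9], records, 2))  # nearest is index 1, then index 0
```

If the IDs Milvus returns for a query diverge noticeably from the brute-force ranking, increasing nprobe usually closes the gap.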

Search vectors in a partition

# Generate 5 random 128-dimensional query vectors
>>> q_records = [[random.random() for _ in range(dim)] for _ in range(5)]
>>> client.search(collection_name='test01', query_records=q_records, top_k=1, partition_tags=['tag01'], params=search_param)

Note: If you do not specify partition_tags, Milvus searches the whole collection.

Close the client

>>> client.close()

FAQ

I'm getting random "socket operation on non-socket" errors from gRPC when connecting to Milvus from an application served on Gunicorn

Make sure to set the environment variable GRPC_ENABLE_FORK_SUPPORT=1. For reference, see https://zhuanlan.zhihu.com/p/136619485
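The variable must be set before gRPC is imported and initialized, so place it at the very top of your Gunicorn entry module (before any import that pulls in pymilvus). A minimal sketch:

```python
import os

# Must run before `import grpc` (which pymilvus pulls in transitively)
# for the fork-support setting to take effect under Gunicorn workers.
os.environ["GRPC_ENABLE_FORK_SUPPORT"] = "1"

# ... only now import pymilvus / your application code.
print(os.environ["GRPC_ENABLE_FORK_SUPPORT"])  # 1
```

Alternatively, export the variable in the environment that launches Gunicorn rather than in code.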
