Marqo
Neural search for humans.
A deep-learning-powered, open-source search engine that seamlessly integrates with your applications, websites, and workflows.
Getting started
- Marqo requires Docker. To install Docker, go to https://docs.docker.com/get-docker/
- Use docker to run Marqo:
docker run --name marqo --privileged -p 8882:8882 --add-host host.docker.internal:host-gateway marqoai/marqo:0.0.1
- Install the Marqo client:
pip install marqo
- Start indexing and searching! Let's look at a simple example below:
import marqo

mq = marqo.Client(url='http://localhost:8882', main_user="admin", main_password="admin")

mq.index("my-first-index").add_documents([
    {
        "Title": "The Travels of Marco Polo",
        "Description": "A 13th-century travelogue describing Polo's travels"
    },
    {
        "Title": "Extravehicular Mobility Unit (EMU)",
        "Description": "The EMU is a spacesuit that provides environmental protection, "
                       "mobility, life support, and communications for astronauts",
        "_id": "article_591"
    }]
)

results = mq.index("my-first-index").search(
    q="What is the best outfit to wear on the moon?"
)
- mq is the client that wraps the marqo API
- add_documents() takes a list of documents, represented as python dicts, for indexing
- add_documents() creates an index with default settings, if one does not already exist
- You can optionally set a document's ID with the special _id field. Otherwise, Marqo will generate one (a sketch of reading generated IDs back follows this list).
- If the index doesn't exist, Marqo will create it. If it exists, Marqo will add the documents to the index.
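If you want the auto-generated IDs, one option is to inspect the add_documents response. This is a minimal sketch; the "items" and "_id" response keys are assumptions about the response shape, not a schema documented here:

# A minimal sketch: read back auto-generated document IDs.
# NOTE: the "items" and "_id" keys are assumed response fields.
response = mq.index("my-first-index").add_documents([
    {"Title": "Moby-Dick", "Description": "A novel about a white whale"}
])
for item in response.get("items", []):
    print(item.get("_id"))  # e.g. a generated UUID string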
Let's have a look at the results:
# let's print out the results:
import pprint
pprint.pprint(results)
{
    'hits': [
        {
            'Title': 'Extravehicular Mobility Unit (EMU)',
            'Description': 'The EMU is a spacesuit that provides environmental '
                           'protection, mobility, life support, and communications '
                           'for astronauts',
            '_highlights': {
                'Description': 'The EMU is a spacesuit that provides environmental '
                               'protection, mobility, life support, and '
                               'communications for astronauts'
            },
            '_id': 'article_591',
            '_score': 0.61938936
        },
        {
            'Title': 'The Travels of Marco Polo',
            'Description': "A 13th-century travelogue describing Polo's travels",
            '_highlights': {'Title': 'The Travels of Marco Polo'},
            '_id': 'e00d1a8d-894c-41a1-8e3b-d8b2a8fce12a',
            '_score': 0.60237324
        }
    ],
    'limit': 10,
    'processingTimeMs': 49,
    'query': 'What is the best outfit to wear on the moon?'
}
- Each hit corresponds to a document that matched the search query
- They are ordered from most to least matching
- limit is the maximum number of hits to be returned. This can be set as a parameter during search (see the sketch after this list)
- Each hit has a _highlights field. This was the part of the document that matched the query the best
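For example, a minimal sketch of capping the number of hits at search time and reading the top hit's highlight; the limit keyword is an assumption mirroring the 'limit' field in the response above:

# A minimal sketch: cap the number of hits and inspect the top hit.
results = mq.index("my-first-index").search(
    q="What is the best outfit to wear on the moon?",
    limit=3  # assumed keyword, mirroring the 'limit' response field
)
top_hit = results["hits"][0]
print(top_hit["_score"])       # hits are ordered from most to least matching
print(top_hit["_highlights"])  # the part of the document that matched best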
Other basic operations
Get document
Retrieve a document by ID.
result = mq.index("my-first-index").get_document(document_id="article_591")
Note that adding a document again using add_documents with the same _id will cause the existing document to be updated.
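For example, a minimal sketch of updating the EMU document by re-adding it with its existing _id:

# A minimal sketch: re-adding a document with the same _id updates the
# stored document instead of creating a duplicate.
mq.index("my-first-index").add_documents([{
    "_id": "article_591",
    "Title": "Extravehicular Mobility Unit (EMU)",
    "Description": "The EMU is a spacesuit worn by astronauts on spacewalks"
}])
result = mq.index("my-first-index").get_document(document_id="article_591")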
Get index stats
Get information about an index.
results = mq.index("my-first-index").get_stats()
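The call returns a plain dict. A minimal sketch of reading a document count from it, where the numberOfDocuments key is an assumption about the response shape:

# A minimal sketch: 'numberOfDocuments' is an assumed stats field,
# not a schema documented here.
stats = mq.index("my-first-index").get_stats()
print(stats.get("numberOfDocuments"))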
Lexical search
Perform a keyword search.
result = mq.index("my-first-index").search('marco polo', search_method=marqo.SearchMethods.LEXICAL)
Search specific fields
Using the default neural search method:
result = mq.index("my-first-index").search('adventure', searchable_attributes=['Title'])
Multimodal and cross-modal search
To power image and text search, Marqo allows users to plug and play with CLIP models from Hugging Face. Note that if you do not configure multimodal search, image URLs will be treated as strings. To start indexing and searching with images, first create an index with a CLIP configuration, as below:
settings = {
    "treat_urls_and_pointers_as_images": True,  # allows us to find an image file and index it
    "model": "ViT-B/32"
}
response = mq.create_index("my-multimodal-index", **settings)
Images can then be added within documents as follows. You can use URLs from the internet (for example, S3) or from the disk of the machine:
response = mq.index("my-multimodal-index").add_documents([{
    "My Image": "https://upload.wikimedia.org/wikipedia/commons/thumb/f/f2/Portrait_Hippopotamus_in_the_water.jpg/440px-Portrait_Hippopotamus_in_the_water.jpg",
    "Description": "The hippopotamus, also called the common hippopotamus or river hippopotamus, is a large semiaquatic mammal native to sub-Saharan Africa",
    "_id": "hippo-facts"
}])
You can then search using text as usual. Both text and image fields will be searched:
results = mq.index("my-multimodal-index").search('animal')
Setting searchable_attributes to the image field ['My Image'] ensures only images are searched in this index:
results = mq.index("my-multimodal-index").search('animal', searchable_attributes=['My Image'])
Searching using an image
Searching using an image can be achieved by providing the image link.
results = mq.index("my-multimodal-index").search('https://upload.wikimedia.org/wikipedia/commons/thumb/9/96/Standing_Hippopotamus_MET_DP248993.jpg/440px-Standing_Hippopotamus_MET_DP248993.jpg')
Delete index
Delete an index.
results = mq.index("my-first-index").delete()
Delete documents
Delete documents.
results = mq.index("my-first-index").delete_documents(ids=["article_591", "article_602"])
A note when using a GPU
Depending on the class of GPU, a version of PyTorch compiled with a recent CUDA release (>11.3) may be required. If, for example, an error appears similar to the following:
NVIDIA #### with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70.
If you want to use the NVIDIA #### GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
then PyTorch with the appropriate CUDA version should be installed. For example, to install PyTorch 1.12 with CUDA 11.6, run the following:
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116 --upgrade
Note that the CUDA version the current driver supports can be obtained with the following command in the terminal:
$ nvidia-smi
The respective PyTorch installation should have a CUDA version that does not exceed this. PyTorch installation instructions can be found at https://pytorch.org/get-started/locally/ and previous versions with other CUDA options can be found at https://pytorch.org/get-started/previous-versions/.
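As a quick sanity check, the following sketch prints the CUDA version the installed PyTorch was built with, along with the compute capability of the detected GPU:

# A minimal sketch: compare the PyTorch build's CUDA version against the
# GPU's compute capability (e.g. (8, 6) corresponds to sm_86).
import torch

print(torch.__version__)    # e.g. 1.12.0+cu116
print(torch.version.cuda)   # CUDA version this PyTorch build was compiled with
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
    print(torch.cuda.get_device_capability(0))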
Warning
Note that you should not run other applications on Marqo's OpenSearch cluster, as Marqo automatically changes and adapts the settings on the cluster.
Contributors
Marqo is a community project with the goal of making neural search accessible to the wider developer community. We are glad that you are interested in helping out! Please read this to get started.
Dev set up
- Create a virtual env
python -m venv ./venv
- Activate the virtual environment
source ./venv/bin/activate
- Install requirements from the requirements file:
pip install -r requirements.txt
- Run the tests with tox: cd into this dir and then run "tox"
- If you update dependencies, make sure to delete the .tox dir and rerun tox
Merge instructions:
- Run the full test suite (by using the command tox in this dir).
- Create a pull request with an attached GitHub issue.
The large data test will build Marqo from the main branch and fill indices with data. Go through and test queries against this data. https://github.com/S2Search/NeuralSearchLargeDataTest
Support
- Join our Slack community and chat with other community members about ideas.
- Marqo community meetings (coming soon!)