Osam
Get up and running with vision foundation models locally.
Osam (/oʊˈsɑm/) is a tool to run open-source vision foundation models locally, inspired by Ollama.
Osam gives you:
- Vision foundation models - Segment-Anything Model, Efficient-SAM, etc.;
- Local APIs - CLI, Python, and HTTP interfaces;
- Customization - Host custom vision models.
Installation
pip install osam
Usage
CLI
# Run a model with an image
osam run efficient-sam:25m --image examples/_images/dogs.jpg > output.jpg
# Get a JSON output
osam run efficient-sam:25m --image examples/_images/dogs.jpg --json
# {"model": "efficient-sam:25m", "mask": "..."}
# Give a prompt
osam run efficient-sam:25m --image examples/_images/dogs.jpg \
--prompt '{"points": [[1439, 504], [1439, 1289]], "point_labels": [1, 1]}' > output.jpg
Input and output images ('dogs.jpg', 'output.jpg').
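In the prompt, each entry of "points" is an (x, y) pixel coordinate, and the matching "point_labels" entry marks that point as foreground (1) or, assuming osam follows the standard SAM convention, background (0). A hypothetical variant that treats the second point as background:

# Keep the first point, exclude the second (assumes SAM's 1=foreground,
# 0=background labeling convention).
osam run efficient-sam:25m --image examples/_images/dogs.jpg \
  --prompt '{"points": [[1439, 504], [1439, 1289]], "point_labels": [1, 0]}' > output.jpg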
Python
import numpy as np
import PIL.Image

import osam.apis
import osam.types

request = osam.types.GenerateRequest(
    model="efficient-sam:25m",
    image=np.asarray(PIL.Image.open("examples/_images/dogs.jpg")),
    prompt=osam.types.Prompt(points=[[1439, 504], [1439, 1289]], point_labels=[1, 1]),
)
response = osam.apis.generate(request=request)
PIL.Image.fromarray(response.mask).save("mask.jpg")
Input and output images ('dogs.jpg', 'mask.jpg').
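To sanity-check the result, the saved mask can be composited back onto the input with plain NumPy and Pillow. A minimal sketch, assuming the mask is a single-channel image the same size as the input; the green tint and 50% blend are arbitrary choices:

import numpy as np
import PIL.Image

image = np.asarray(PIL.Image.open("examples/_images/dogs.jpg"))
# Re-binarize: JPEG compression leaves values near, not exactly at, 0 and 255.
mask = np.asarray(PIL.Image.open("mask.jpg").convert("L")) > 127

overlay = image.copy()
# Blend masked pixels with green at 50% opacity.
overlay[mask] = (0.5 * overlay[mask] + 0.5 * np.array([0, 255, 0])).astype(np.uint8)
PIL.Image.fromarray(overlay).save("overlay.jpg")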
HTTP
# Start the server
osam serve
# POST request
curl 127.0.0.1:11368/api/generate -X POST \
-H "Content-Type: application/json" \
-d "{\"model\": \"efficient-sam:25m\", \"image\": \"$(cat examples/_images/dogs.jpg | base64)\"}" \
| jq -r .mask | base64 --decode > mask.jpg
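The same endpoint can also be called from Python. A short sketch using the third-party requests library, relying only on the request and response fields shown in the curl example above:

import base64

import requests  # pip install requests

with open("examples/_images/dogs.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

# Same payload as the curl example above.
resp = requests.post(
    "http://127.0.0.1:11368/api/generate",
    json={"model": "efficient-sam:25m", "image": image_b64},
)
resp.raise_for_status()

# The "mask" field is a base64-encoded image, as in the curl + jq pipeline.
with open("mask.jpg", "wb") as f:
    f.write(base64.b64decode(resp.json()["mask"]))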
License
MIT