# LandingLens Python SDK
The LandingLens Python SDK contains the LandingLens development library and examples that show how to integrate your app with LandingLens in a variety of scenarios. The examples cover different model types, image acquisition sources, and post-processing techniques.
We've provided some examples as Jupyter Notebooks, which focus on ease of use, and others as Python apps, which provide a more robust and complete experience.
| Example | Description | Type |
|---|---|---|
| Poker Card Suit Identification | This notebook shows how to use an object detection model from LandingLens to detect suits on playing cards. A webcam is used to take photos of playing cards. | Jupyter Notebook |
| Door Monitoring for Home Automation | This notebook shows how to use an object detection model from LandingLens to detect whether a door is open or closed. An RTSP camera is used to acquire images. | Jupyter Notebook |
| Satellite Images and Post-Processing | This notebook shows how to use a Visual Prompting model from LandingLens to identify different objects in satellite images. The notebook includes post-processing scripts that calculate the percentage of ground cover that each object takes up. | Jupyter Notebook |
| License Plate Detection and Recognition | This notebook shows how to extract frames from a video file and use an object detection model and OCR from LandingLens to identify and recognize different license plates. | Jupyter Notebook |
| Streaming Video | This application shows how to continuously run inference on images extracted from a streaming RTSP video camera feed. | Python application |
## Documentation
- Landing AI Python Library Quick Start Guide
- Landing AI Python Library API Reference
- Landing AI Python Library Changelog
- Landing AI Support Center
- LandingLens Walk-Through Video
## Install the Library

```bash
pip install landingai
```
## Quick Start

### Prerequisites
This library needs to communicate with the LandingLens platform to perform certain functions. For example, the `Predictor` API calls the HTTP endpoint of your deployed model. To enable communication with LandingLens, you will need the following information:

- The **Endpoint ID** of your deployed model in LandingLens. You can find this on the Deploy page in LandingLens.
- The **API Key** for the LandingLens organization that has the model you want to deploy. To learn how to generate these credentials, go here.
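Rather than hard-coding these credentials in your script, you can read them from environment variables. This is a minimal sketch; the variable names `LANDINGAI_ENDPOINT_ID` and `LANDINGAI_API_KEY` are illustrative conventions of this example, not names the library requires:

```python
import os

# Illustrative variable names -- pick whatever convention your project uses.
# Falls back to the same placeholders used in the examples below.
endpoint_id = os.environ.get("LANDINGAI_ENDPOINT_ID", "FILL_YOUR_INFERENCE_ENDPOINT_ID")
api_key = os.environ.get("LANDINGAI_API_KEY", "FILL_YOUR_API_KEY")
```

This keeps secrets out of source control and lets you swap endpoints between environments without editing code.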
### Run Inference

Run inference using the endpoint you created in LandingLens:

- Install the Python library.
- Create a `Predictor` instance with your Endpoint ID and API Key.
- Load your image into a PIL Image (in the example below, the image is "image.png").
- Call the `predict()` function with the image.
```python
from PIL import Image
from landingai.predict import Predictor

# Enter your Endpoint ID and API Key
endpoint_id = "FILL_YOUR_INFERENCE_ENDPOINT_ID"
api_key = "FILL_YOUR_API_KEY"

# Load your image
image = Image.open("image.png")

# Run inference
predictor = Predictor(endpoint_id, api_key=api_key)
predictions = predictor.predict(image)
```
See an end-to-end working example here.
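The exact prediction type returned by `predict()` depends on your model, but object-detection predictions carry a label and a confidence score. A small helper like the following (hypothetical; the attribute names `label_name` and `score` are assumptions you should verify against the API Reference) prints a one-line summary per prediction:

```python
def summarize_predictions(predictions):
    """Print one line per prediction.

    Assumes each prediction object exposes `label_name` and `score`
    attributes; check the API Reference for your model type.
    """
    for pred in predictions:
        print(f"{pred.label_name}: {pred.score:.2f}")
```

Calling `summarize_predictions(predictions)` after the inference step above gives you a quick sanity check before moving on to visualization.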
### Visualize and Save Predictions

Visualize your inference results by overlaying the predictions on the input image and saving the updated image:
```python
from landingai.visualize import overlay_predictions

# Continue from the example above
predictions = predictor.predict(image)
image_with_preds = overlay_predictions(predictions, image)
image_with_preds.save("image.jpg")
```
### Create a Vision Pipeline

All the modules shown above, and others, can be chained together using the `landingai.pipeline` abstraction. At its core, a pipeline is a sequence of chained calls that operate on a `landingai.pipeline.Frame`.

The following example shows how the previous sections come together in a pipeline. For more details, go to the Vision Pipelines User Guide.
```python
from landingai.predict import Predictor
import landingai.pipeline as pl

cloud_sky_model = Predictor("FILL_YOUR_INFERENCE_ENDPOINT_ID", api_key="FILL_YOUR_API_KEY")

stream_url = "FILL_YOUR_RTSP_STREAM_URL"
camera = pl.image_source.NetworkedCamera(stream_url)
for frame in camera:
    (
        frame.downsize(width=1024)
        .run_predict(predictor=cloud_sky_model)
        .overlay_predictions()
        .show_image()
        .save_image(filename_prefix="./capture")
    )
```
## Run Examples Locally

All the examples in this repo can be run locally.

To give you some guidance, here's how you can run the `rtsp-capture` example locally in a shell environment:

- Clone the repo: `git clone https://github.com/landing-ai/landingai-python.git`
- Install the library: `poetry install --with examples` (see the Poetry docs for how to install `poetry`)
- Activate the virtual environment: `poetry shell`
- Run: `python landingai-python/examples/capture-service/run.py`