Client SDK for InvertedAI

InvertedAI

Overview

Inverted AI provides an API for controlling non-playable characters (NPCs) in autonomous driving simulations, available as a REST API with Python and C++ SDKs built on top of it. Using the API requires an access key - create an account on our user portal to get one. New users are given keys preloaded with an API access budget; researchers affiliated with academic institutions generally receive enough credits to conduct their research for free. This page describes how to get started quickly. For a more in-depth understanding, see the API usage guide and the detailed documentation for the REST API, the Python SDK, and the C++ SDK. To understand the underlying technology and why it's necessary for autonomous driving simulations, visit the Inverted AI website.

Getting started

Installation

To install the Python package from PyPI:

pip install --upgrade invertedai

The Python client SDK is open source, so you can also download it and build locally.

To make calls to the Inverted AI API endpoints, you must obtain and set an API key (sign up on our website to receive one).

There are two ways to set the API key in the Python SDK. The first is to set the key string explicitly within a Python script:

iai.add_apikey('<INSERT_KEY_HERE>')

The second is to set the IAI_API_KEY environment variable to your key string, using the method appropriate to your operating system (shown here for Unix-like shells):

export IAI_API_KEY="<INSERT_KEY_HERE>"

To set the API key in the C++ SDK, please review the executables in the examples folder.

Minimal example

import invertedai as iai
from invertedai.utils import get_default_agent_properties
from invertedai.common import AgentType

import matplotlib.pyplot as plt
import os

location = "canada:drake_street_and_pacific_blvd"  # select one of available locations

api_key = os.environ.get("IAI_API_KEY", None)
if api_key is None:
    iai.add_apikey('<INSERT_KEY_HERE>')  # specify your key here or through the IAI_API_KEY variable

print("Begin initialization.")
# get static information about a given location including map in osm
# format and list traffic lights with their IDs and locations.
location_info_response = iai.location_info(location=location)

# initialize the simulation by spawning NPCs
response = iai.initialize(
    location=location,  # select one of available locations
    agent_properties=get_default_agent_properties({AgentType.car:10}),  # number of NPCs to spawn
)
agent_properties = response.agent_properties  # get dimension and other attributes of NPCs

rendered_static_map = location_info_response.birdview_image.decode()
scene_plotter = iai.utils.ScenePlotter(
    rendered_static_map,
    location_info_response.map_fov,
    (location_info_response.map_center.x, location_info_response.map_center.y),
    location_info_response.static_actors
)
scene_plotter.initialize_recording(
    agent_states=response.agent_states,
    agent_properties=agent_properties,
)

print("Begin stepping through simulation.")
for _ in range(100):  # how many simulation steps to execute (10 steps is 1 second)

    # query the API for subsequent NPC predictions
    response = iai.drive(
        location=location,
        agent_properties=agent_properties,
        agent_states=response.agent_states,
        recurrent_states=response.recurrent_states,
        light_recurrent_states=response.light_recurrent_states,
    )

    # save the visualization
    scene_plotter.record_step(response.agent_states, response.traffic_lights_states)

print("Simulation finished, save visualization.")
# save the visualization to disk
fig, ax = plt.subplots(constrained_layout=True, figsize=(50, 50))
gif_name = 'minimal_example.gif'
scene_plotter.animate_scene(
    output_name=gif_name,
    ax=ax,
    direction_vec=False,
    velocity_vec=False,
    plot_frame_number=True
)
print("Done")

Stateful Cosimulation

Conceptually, the API is used to establish synchronous co-simulation between your own simulator running locally on your machine and the NPC engine running on Inverted AI servers. The basic integration in Python looks like this.

import invertedai as iai
from invertedai.common import AgentType
from invertedai import get_regions_default
from invertedai.utils import get_default_agent_properties

import numpy as np
import matplotlib.pyplot as plt

from typing import List

iai.add_apikey('<INSERT_KEY_HERE>')  # Specify your key here or through the IAI_API_KEY variable

print("Begin initialization.")
LOCATION = "canada:drake_street_and_pacific_blvd"

NUM_EGO_AGENTS = 1
NUM_NPC_AGENTS = 10
NUM_TIME_STEPS = 100

##########################################################################################################
# INSERT YOUR OWN EGO PREDICTIONS FOR THE INITIALIZATION
ego_response = iai.initialize(
    location = LOCATION,
    agent_properties = get_default_agent_properties({AgentType.car:NUM_EGO_AGENTS}),
)
ego_agent_properties = ego_response.agent_properties  # get dimension and other attributes of NPCs
##########################################################################################################

# Generate the region objects for large_initialization
regions = get_regions_default(
    location = LOCATION,
    agent_count_dict = {AgentType.car: NUM_NPC_AGENTS}
)
# Instantiate a stateful wrapper for Inverted AI API
iai_simulation = iai.BasicCosimulation(  
    location = LOCATION,
    ego_agent_properties = ego_agent_properties,
    ego_agent_agent_states = ego_response.agent_states,
    regions = regions,
    traffic_light_state_history = [ego_response.traffic_lights_states]
)

# Initialize the ScenePlotter for scene visualization
location_info_response = iai.location_info(location=LOCATION)
rendered_static_map = location_info_response.birdview_image.decode()
scene_plotter = iai.utils.ScenePlotter(
    rendered_static_map,
    location_info_response.map_fov,
    (location_info_response.map_center.x, location_info_response.map_center.y),
    location_info_response.static_actors
)
scene_plotter.initialize_recording(
    agent_states = iai_simulation.agent_states,
    agent_properties = iai_simulation.agent_properties,
    conditional_agents = list(range(NUM_EGO_AGENTS)),
    traffic_light_states = ego_response.traffic_lights_states
)

print("Begin stepping through simulation.")
for _ in range(NUM_TIME_STEPS):  # How many simulation time steps to execute (10 steps is 1 second)
##########################################################################################################    
    # INSERT YOUR OWN EGO PREDICTIONS FOR THIS TIME STEP
    ego_response = iai.drive(
        location = LOCATION,
        agent_properties = ego_agent_properties+iai_simulation.npc_properties,
        agent_states = ego_response.agent_states+iai_simulation.npc_states,
        recurrent_states = ego_response.recurrent_states+iai_simulation.npc_recurrent_states,
        light_recurrent_states = ego_response.light_recurrent_states,
    )
    ego_response.agent_states = ego_response.agent_states[:NUM_EGO_AGENTS]
    ego_response.recurrent_states = ego_response.recurrent_states[:NUM_EGO_AGENTS]
##########################################################################################################

    # Query the API for subsequent NPC predictions, informing it how the ego vehicle acted
    iai_simulation.step(
        current_ego_agent_states = ego_response.agent_states,
        traffic_lights_states = ego_response.traffic_lights_states
    )

    # Save the visualization with ScenePlotter
    scene_plotter.record_step(iai_simulation.agent_states, iai_simulation.light_states)

# Save the visualization to disk
print("Simulation finished, save visualization.")
fig, ax = plt.subplots(constrained_layout=True, figsize=(50, 50))
plt.axis('off')
gif_name = 'cosimulation_minimal_example.gif'
scene_plotter.animate_scene(
    output_name = gif_name,
    ax = ax,
    direction_vec = False,
    velocity_vec = False,
    plot_frame_number = True
)
print("Done")

To quickly see how Inverted AI NPCs behave, try our Colab, where all agents are NPCs, or go to our GitHub repository to run it locally. When you're ready to try our NPCs with a real simulator, see the example CARLA integration. The examples are currently only provided in Python, but if you want to use the API from another language, you can call the REST API directly.
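As a sketch of what calling the REST API from outside the SDK might look like, the snippet below assembles an HTTP request mirroring how the Python SDK's `drive` call is parameterized. The base URL, path, header name, and JSON field names here are illustrative assumptions, not confirmed details of the REST API - consult the REST API documentation for the authoritative endpoints and schema.

```python
import json
import urllib.request

API_ROOT = "https://api.inverted.ai"  # assumed base URL; see the REST API docs


def build_drive_request(api_key, location, agent_states, agent_properties,
                        recurrent_states=None):
    """Assemble an HTTP request mirroring the Python SDK's iai.drive() arguments.

    The path, header name, and field names are illustrative assumptions --
    check the REST API reference for the real schema.
    """
    body = {
        "location": location,
        "agent_states": agent_states,
        "agent_properties": agent_properties,
        "recurrent_states": recurrent_states,
    }
    return urllib.request.Request(
        url=f"{API_ROOT}/drive",
        data=json.dumps(body).encode("utf-8"),
        headers={"x-api-key": api_key, "Content-Type": "application/json"},
        method="POST",
    )


# Example: inspect the request without sending it.
req = build_drive_request(
    api_key="<INSERT_KEY_HERE>",
    location="canada:drake_street_and_pacific_blvd",
    agent_states=[[0.0, 0.0, 0.0, 5.0]],  # x, y, orientation, speed (illustrative)
    agent_properties=[{"length": 4.5, "width": 2.0, "agent_type": "car"}],
)
print(req.full_url, req.get_method())
```

Any HTTP client in any language can issue an equivalent POST; the only Python-specific piece above is the request-building helper.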

Download files

Download the file for your platform.

Source Distribution

invertedai-0.0.21.tar.gz (65.5 kB view details)

Uploaded Source

Built Distribution

invertedai-0.0.21-py3-none-any.whl (80.5 kB view details)

Uploaded Python 3

File details

Details for the file invertedai-0.0.21.tar.gz.

File metadata

  • Download URL: invertedai-0.0.21.tar.gz
  • Upload date:
  • Size: 65.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for invertedai-0.0.21.tar.gz

  • SHA256: c32edbd251c0850976e9de942d975d78bdfee70b7b35d385578d2913c10c81e5
  • MD5: b8647c0944b9104412bf13263fb224da
  • BLAKE2b-256: 6aafb16dca2b56ffd1ff38fc029a8bed0317ea3304775ae5ffc447ef9f3b7791
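As a sketch of how the digests above can be checked, the snippet below streams a file through SHA-256 using Python's standard `hashlib` and compares the result to an expected value. The filename and digest are taken from the listing above; the helper itself is generic.

```python
import hashlib


def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 in chunks and return the hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


# Digest published for the sdist above; compare after downloading:
expected = "c32edbd251c0850976e9de942d975d78bdfee70b7b35d385578d2913c10c81e5"
# assert sha256_of("invertedai-0.0.21.tar.gz") == expected, "hash mismatch"
```

Reading in chunks keeps memory use constant regardless of the file size.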


File details

Details for the file invertedai-0.0.21-py3-none-any.whl.

File metadata

  • Download URL: invertedai-0.0.21-py3-none-any.whl
  • Upload date:
  • Size: 80.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for invertedai-0.0.21-py3-none-any.whl

  • SHA256: ef0392b76f4baa5029506190867b30277c5cdf58f379fd578e07fa1f0ce81783
  • MD5: 7049862452bf9bafdcc424773b333ec7
  • BLAKE2b-256: e6641744c2c7a8400067040bb07cc8cb7baabdfe958a96aab9e12fc6c9bbafef

