Client SDK for InvertedAI

Project description

InvertedAI

Overview

Inverted AI provides an API for controlling non-playable characters (NPCs) in autonomous driving simulations. It is available as a REST API, with a Python SDK and a C++ SDK built on top of it. Using the API requires an access key - create an account on our user portal to get one. New users are given keys preloaded with an API access budget, and researchers affiliated with academic institutions generally receive enough credits to conduct their research for free. This page describes how to get started quickly. For a more in-depth understanding, see the API usage guide and the detailed documentation for the REST API, the Python SDK, and the C++ SDK. To understand the underlying technology and why it's necessary for autonomous driving simulations, visit the Inverted AI website.

Getting started

Installation

To install the Python package from PyPI:

pip install --upgrade invertedai

The Python client SDK is open source, so you can also download it and build locally.
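
A typical source install looks like the following (the repository URL is an assumption based on our GitHub organization; adjust it to wherever you obtained the source):

git clone https://github.com/inverted-ai/invertedai.git
cd invertedai
pip install -e .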

Minimal example

import numpy as np
import matplotlib.pyplot as plt
import invertedai as iai

location = "iai:drake_street_and_pacific_blvd"  # select one of available locations

iai.add_apikey('')  # specify your key here or through the IAI_API_KEY environment variable

print("Begin initialization.")
# get static information about a given location, including the map in OSM
# format and a list of traffic lights with their IDs and locations
location_info_response = iai.location_info(location=location)

# initialize the simulation by spawning NPCs
response = iai.initialize(
    location=location,  # select one of available locations
    agent_count=10,    # number of NPCs to spawn
    get_birdview=True,  # provides simple visualization - don't use in production
    traffic_light_state_history=None
)
agent_attributes = response.agent_attributes  # get dimension and other attributes of NPCs

# reuse the location info fetched above to render the static map
rendered_static_map = location_info_response.birdview_image.decode()
scene_plotter = iai.utils.ScenePlotter(
    rendered_static_map,
    location_info_response.map_fov,
    (location_info_response.map_center.x, location_info_response.map_center.y),
    location_info_response.static_actors
)
scene_plotter.initialize_recording(
    agent_states=response.agent_states,
    agent_attributes=agent_attributes,
)

print("Begin stepping through simulation.")
for _ in range(100):  # how many simulation steps to execute (10 steps is 1 second)

    # query the API for subsequent NPC predictions
    response = iai.drive(
        location=location,
        agent_attributes=agent_attributes,
        agent_states=response.agent_states,
        recurrent_states=response.recurrent_states,
        get_birdview=True,
        light_recurrent_states=response.light_recurrent_states,
    )

    # save the visualization
    scene_plotter.record_step(response.agent_states, response.traffic_lights_states)

print("Simulation finished, save visualization.")
# save the visualization to disk
fig, ax = plt.subplots(constrained_layout=True, figsize=(50, 50))
gif_name = 'minimal_example.gif'
scene_plotter.animate_scene(
    output_name=gif_name,
    ax=ax,
    direction_vec=False,
    velocity_vec=False,
    plot_frame_number=True
)
print("Done")

Stateful Co-simulation

Conceptually, the API is used to establish synchronous co-simulation between your own simulator running locally on your machine and the NPC engine running on Inverted AI servers. The basic integration in Python looks like this.

from typing import List
import numpy as np
import invertedai as iai
import matplotlib.pyplot as plt

iai.add_apikey('')  # specify your key here or through the IAI_API_KEY variable


class LocalSimulator:
    """
    Mock-up of a local simulator in which you control the ego vehicle. This example only supports a single ego vehicle.
    """

    def __init__(self, ego_state: iai.common.AgentState, npc_states: List[iai.common.AgentState]):
        self.ego_state = ego_state
        self.npc_states = npc_states

    def _step_ego(self):
        """
        The simple motion model drives forward with constant speed.
        The ego agent ignores the map and NPCs for simplicity.
        """
        dt = 0.1
        dx = self.ego_state.speed * dt * np.cos(self.ego_state.orientation)
        dy = self.ego_state.speed * dt * np.sin(self.ego_state.orientation)

        self.ego_state = iai.common.AgentState(
            center=iai.common.Point(x=self.ego_state.center.x + dx, y=self.ego_state.center.y + dy),
            orientation=self.ego_state.orientation,
            speed=self.ego_state.speed,
        )

    def step(self, predicted_npc_states):
        self._step_ego()  # ego vehicle moves first so that it doesn't see future NPC movement
        self.npc_states = predicted_npc_states
        return self.ego_state

print("Begin initialization.")
location = 'iai:ubc_roundabout'
iai_simulation = iai.BasicCosimulation(  # instantiate a stateful wrapper for Inverted AI API
    location=location,  # select one of available locations
    agent_count=5,  # how many vehicles in total to use in the simulation
    ego_agent_mask=[True, False, False, False, False],  # first vehicle is ego, rest are NPCs
    get_birdview=False,  # provides simple visualization - don't use in production
    traffic_lights=True,  # fetch traffic light states, used for initializing and stepping the simulation
)

location_info_response = iai.location_info(location=location)
rendered_static_map = location_info_response.birdview_image.decode()
scene_plotter = iai.utils.ScenePlotter(
    rendered_static_map,
    location_info_response.map_fov,
    (location_info_response.map_center.x, location_info_response.map_center.y),
    location_info_response.static_actors
)
scene_plotter.initialize_recording(
    agent_states=iai_simulation.agent_states,
    agent_attributes=iai_simulation.agent_attributes,
)

print("Begin stepping through simulation.")
local_simulation = LocalSimulator(iai_simulation.ego_states[0], iai_simulation.npc_states)
for _ in range(100):  # how many simulation steps to execute (10 steps is 1 second)
    # query the API for subsequent NPC predictions, informing it how the ego vehicle acted
    iai_simulation.step([local_simulation.ego_state])
    # collect predictions for the next time step
    predicted_npc_behavior = iai_simulation.npc_states
    # execute predictions in your simulator, using your actions for the ego vehicle
    updated_ego_agent_state = local_simulation.step(predicted_npc_behavior)
    # save the visualization with ScenePlotter
    scene_plotter.record_step(iai_simulation.agent_states)

print("Simulation finished, save visualization.")
# save the visualization to disk
fig, ax = plt.subplots(constrained_layout=True, figsize=(50, 50))
gif_name = 'cosimulation_minimal_example.gif'
scene_plotter.animate_scene(
    output_name=gif_name,
    ax=ax,
    direction_vec=False,
    velocity_vec=False,
    plot_frame_number=True
)
print("Done")

To quickly check out how Inverted AI NPCs behave, try our Colab notebook, where all agents are NPCs, or go to our GitHub repository to run the examples locally. When you're ready to try our NPCs with a real simulator, see the example CARLA integration. The examples are currently only provided in Python, but if you want to use the API from another language, you can use the REST API directly.
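
For instance, a raw HTTP call from Python might look like the sketch below. The root URL, route name, and header name are illustrative placeholders only - consult the REST API reference for the actual endpoints and schemas:

import os
import requests

API_ROOT = "https://api.inverted.ai"  # placeholder root URL - check the REST API docs
headers = {"x-api-key": os.environ["IAI_API_KEY"]}  # header name is an assumption

# hypothetical route for illustration; the reference documents the
# real location_info/initialize/drive endpoints
response = requests.get(
    API_ROOT + "/location_info",
    params={"location": "iai:drake_street_and_pacific_blvd"},
    headers=headers,
    timeout=30,
)
response.raise_for_status()
print(response.json())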

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

invertedai-0.0.18.tar.gz (47.0 kB)

Built Distribution

invertedai-0.0.18-py3-none-any.whl (54.2 kB)

File details

Details for the file invertedai-0.0.18.tar.gz.

File metadata

  • Download URL: invertedai-0.0.18.tar.gz
  • Upload date:
  • Size: 47.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.0 CPython/3.12.5

File hashes

Hashes for invertedai-0.0.18.tar.gz

  • SHA256: c59ff1f08e9369ce654d2140296c74364a113204451a09eebe0d6f248fa786d0
  • MD5: 44cc5fb9dad7fc8dbbe91c0442b27fb3
  • BLAKE2b-256: 5b85f966fa03cf68e13f5f9264aa128675a3aed9ae974e9e23413e1388f7a03c

See more details on using hashes here.
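
For example, you can verify a downloaded file against the published SHA256 digest before installing (digest copied from the list above):

import hashlib

expected = "c59ff1f08e9369ce654d2140296c74364a113204451a09eebe0d6f248fa786d0"
with open("invertedai-0.0.18.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()
assert digest == expected, "hash mismatch - do not install this file"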

File details

Details for the file invertedai-0.0.18-py3-none-any.whl.

File metadata

  • Download URL: invertedai-0.0.18-py3-none-any.whl
  • Upload date:
  • Size: 54.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.0 CPython/3.12.5

File hashes

Hashes for invertedai-0.0.18-py3-none-any.whl

  • SHA256: 0e5e9b31d9a36697680ae419ba18e56f23709a7dcde2ec3aa4b8b4c734b61067
  • MD5: 1f5abef510fcbd64f0837e274e1dff5f
  • BLAKE2b-256: 9f5c3ee9f48df646aefae7edebef49d5f69a9be29834ccc0d5fdac738b1d959c

See more details on using hashes here.
