Client SDK for InvertedAI
Overview
Inverted AI provides an API for controlling non-playable characters (NPCs) in autonomous driving simulations. It is available as a REST API, with Python and C++ SDKs built on top of it. Using the API requires an access key - contact us to get yours. This page describes how to get started quickly. For a more in-depth understanding, see the API usage guide and the detailed documentation for the REST API, the Python SDK, and the C++ SDK. To understand the underlying technology and why it's necessary for autonomous driving simulations, visit the Inverted AI website.
Getting started
Installation
To install the Python package from PyPI:
pip install --upgrade invertedai
The Python client SDK is open source, so you can also download it and build locally.
Minimal example
import numpy as np
import matplotlib.pyplot as plt
import invertedai as iai
location = "iai:drake_street_and_pacific_blvd"  # select one of the available locations
iai.add_apikey('') # specify your key here or through the IAI_API_KEY variable
print("Begin initialization.")
# get static information about a given location including map in osm
# format and list traffic lights with their IDs and locations.
location_info_response = iai.location_info(location=location)
# get traffic light states
light_response = iai.light(location=location)
# initialize the simulation by spawning NPCs
response = iai.initialize(
    location=location,  # select one of the available locations
    agent_count=10,  # number of NPCs to spawn
    get_birdview=True,  # provides simple visualization - don't use in production
    traffic_light_state_history=[light_response.traffic_lights_states],  # provide traffic light states
)
agent_attributes = response.agent_attributes  # get dimensions and other attributes of NPCs
# fetch static location info again, this time to render the map for visualization
location_info_response = iai.location_info(location=location)
rendered_static_map = location_info_response.birdview_image.decode()
scene_plotter = iai.utils.ScenePlotter(rendered_static_map,
                                       location_info_response.map_fov,
                                       (location_info_response.map_center.x, location_info_response.map_center.y),
                                       location_info_response.static_actors)
scene_plotter.initialize_recording(
    agent_states=response.agent_states,
    agent_attributes=agent_attributes,
)
print("Begin stepping through simulation.")
for _ in range(100):  # how many simulation steps to execute (10 steps is 1 second)
    # get next traffic light state
    light_response = iai.light(location=location, recurrent_states=light_response.recurrent_states)
    # query the API for subsequent NPC predictions
    response = iai.drive(
        location=location,
        agent_attributes=agent_attributes,
        agent_states=response.agent_states,
        recurrent_states=response.recurrent_states,
        get_birdview=True,
        traffic_lights_states=light_response.traffic_lights_states,
    )
    # save the visualization
    scene_plotter.record_step(response.agent_states, light_response.traffic_lights_states)
print("Simulation finished, save visualization.")
# save the visualization to disk
fig, ax = plt.subplots(constrained_layout=True, figsize=(50, 50))
gif_name = 'minimal_example.gif'
scene_plotter.animate_scene(
    output_name=gif_name,
    ax=ax,
    direction_vec=False,
    velocity_vec=False,
    plot_frame_number=True,
)
print("Done")
Stateful Cosimulation
Conceptually, the API is used to establish synchronous co-simulation between your own simulator running locally on your machine and the NPC engine running on Inverted AI servers. The basic integration in Python looks like this.
from typing import List
import numpy as np
import invertedai as iai
import matplotlib.pyplot as plt
iai.add_apikey('') # specify your key here or through the IAI_API_KEY variable
class LocalSimulator:
    """
    Mock-up of a local simulator, where you control the ego vehicle.
    This example only supports a single ego vehicle.
    """

    def __init__(self, ego_state: iai.common.AgentState, npc_states: List[iai.common.AgentState]):
        self.ego_state = ego_state
        self.npc_states = npc_states

    def _step_ego(self):
        """
        This simple motion model drives forward with constant speed.
        The ego agent ignores the map and NPCs for simplicity.
        """
        dt = 0.1
        dx = self.ego_state.speed * dt * np.cos(self.ego_state.orientation)
        dy = self.ego_state.speed * dt * np.sin(self.ego_state.orientation)
        self.ego_state = iai.common.AgentState(
            center=iai.common.Point(x=self.ego_state.center.x + dx, y=self.ego_state.center.y + dy),
            orientation=self.ego_state.orientation,
            speed=self.ego_state.speed,
        )

    def step(self, predicted_npc_states):
        self._step_ego()  # ego vehicle moves first so that it doesn't see future NPC movement
        self.npc_states = predicted_npc_states
        return self.ego_state
print("Begin initialization.")
location = 'iai:ubc_roundabout'
iai_simulation = iai.BasicCosimulation(  # instantiate a stateful wrapper for the Inverted AI API
    location=location,  # select one of the available locations
    agent_count=5,  # how many vehicles in total to use in the simulation
    ego_agent_mask=[True, False, False, False, False],  # first vehicle is ego, rest are NPCs
    get_birdview=False,  # provides simple visualization - don't use in production
    traffic_lights=True,  # get traffic light states and use them for initializing and stepping the simulation
)
location_info_response = iai.location_info(location=location)
rendered_static_map = location_info_response.birdview_image.decode()
scene_plotter = iai.utils.ScenePlotter(rendered_static_map,
                                       location_info_response.map_fov,
                                       (location_info_response.map_center.x, location_info_response.map_center.y),
                                       location_info_response.static_actors)
scene_plotter.initialize_recording(
    agent_states=iai_simulation.agent_states,
    agent_attributes=iai_simulation.agent_attributes,
)
print("Begin stepping through simulation.")
local_simulation = LocalSimulator(iai_simulation.ego_states[0], iai_simulation.npc_states)
for _ in range(100):  # how many simulation steps to execute (10 steps is 1 second)
    # query the API for subsequent NPC predictions, informing it how the ego vehicle acted
    iai_simulation.step([local_simulation.ego_state])
    # collect predictions for the next time step
    predicted_npc_behavior = iai_simulation.npc_states
    # execute predictions in your simulator, using your actions for the ego vehicle
    updated_ego_agent_state = local_simulation.step(predicted_npc_behavior)
    # save the visualization with ScenePlotter
    scene_plotter.record_step(iai_simulation.agent_states)
print("Simulation finished, save visualization.")
# save the visualization to disk
fig, ax = plt.subplots(constrained_layout=True, figsize=(50, 50))
gif_name = 'cosimulation_minimal_example.gif'
scene_plotter.animate_scene(
    output_name=gif_name,
    ax=ax,
    direction_vec=False,
    velocity_vec=False,
    plot_frame_number=True,
)
print("Done")
To quickly check out how Inverted AI NPCs behave, try our Colab, where all agents are NPCs, or go to our GitHub repository to execute it locally. When you're ready to try our NPCs with a real simulator, see the example CARLA integration. The examples are currently only provided in Python, but if you want to use the API from another language, you can use the REST API directly.
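As a rough illustration of that REST route, the sketch below issues a drive-style request with Python's requests library. The host, path, header name, and payload fields shown are placeholders standing in for whatever the REST API documentation specifies, not the actual interface.
import os
import requests

# All of the following values are placeholders for illustration only;
# consult the REST API documentation for the real host, path, headers, and payload schema.
API_HOST = "https://example.invalid"               # placeholder host
DRIVE_URL = f"{API_HOST}/drive"                    # placeholder path
headers = {"api-key": os.environ["IAI_API_KEY"]}   # placeholder header name

payload = {  # placeholder payload mirroring the arguments the Python SDK passes to iai.drive
    "location": "iai:drake_street_and_pacific_blvd",
    "agent_states": [],
    "agent_attributes": [],
    "recurrent_states": [],
}
response = requests.post(DRIVE_URL, json=payload, headers=headers)
response.raise_for_status()
predictions = response.json()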